Authored by:
- Vignesh Subramanian | VP of Product Management for Platform Technology, Infor
- Yogesh Dhimate | Senior Partner Solutions Architect, AWS
- Brandon Blincoe | Senior Partner Solutions Architect, AWS
- Vishesh Harsh Jha | Senior Solutions Architect, AWS
Generative artificial intelligence (AI) has become a game-changer for businesses looking to improve productivity, and it is set to transform the software and hardware landscape as we know it. Emerging capabilities, such as large language models (LLMs) and foundation models (FMs), can understand and generate human-like text and are surpassing average human performance across a wide range of cognitive tasks. In this post, the first in a series co-authored by the Infor® and Amazon Web Services® (AWS®) technical teams, we discuss how Infor is using AWS generative AI services to enhance productivity for our customers. We examine the challenges and opportunities where generative AI can address industry-specific use cases, and we explain how AWS and Infor are uniquely positioned to deliver generative AI capabilities that solve industry problems.
History of the Infor-AWS AI partnership
In 2013, Infor made the strategic decision to build and run its applications on AWS. The partnership has grown significantly over the years, with Infor joining the AWS Partner Network (APN), attaining multiple AWS competencies, and building over 15 solutions on AWS. Both companies have continued to deepen their investment in the partnership.
Since 2017, Infor has incrementally built robust AI and machine learning (ML) offerings by leveraging AWS services. That year, Infor launched its first AI offering, Coleman, which included an ML platform called Infor AI built on Amazon SageMaker. The platform provided predictive and prescriptive AI solutions for deep industry problems, such as inventory intelligence, price forecasting, product up-sell/cross-sell, supply chain intelligence, and vendor rating. Infor also complemented the Coleman platform with a conversational chatbot built on Amazon Lex.
Building on this experience with traditional AI and ML, Infor is now looking to bring generative AI capabilities to its customers. To this end, Infor has selected Amazon Bedrock, a fully managed service that offers a choice of industry-leading FMs and a broad set of capabilities to build generative AI applications. This partnership allows Infor to deepen its AWS collaboration, build industry-specific AI capabilities accessible to all customers, and rapidly expand its generative AI solution offerings to drive value for its user base.
Generative AI has captured the fascination of the mainstream, with its chat-like experience making the technology relatable to a wide audience. However, the true business value of this technology lies in solving industry-specific problems with the help of domain knowledge. Infor's deep industry knowledge and AWS's technology prowess come together to deliver immediate value to our mutual customers.
To this end, we analyzed how Infor products were being used, considered external trends, and identified the following three core spheres within an enterprise, which translate to top-line, bottom-line, and intangible impacts:
- Enhance customer experiences
- Boost employee productivity and creativity
- Optimize business processes
To build a scalable and secure mechanism for delivering these capabilities, Infor has created an internal platform, called the Infor GenAI Platform. This platform provides a standardized interface for generative AI-powered capabilities, enabling Infor's product teams to build new customer experiences in their respective Infor CloudSuite™ environments without reinventing the wheel for infrastructure, authentication, and integration with generative AI tools. The interface for the Infor GenAI Platform is built on the Infor OS Platform, which provides core services such as security, application programming interface (API) management, and a standardized way of interacting with AWS.
How the Infor GenAI Platform is built
The generative AI-powered core capabilities offered to the Infor engineering teams to build features in Infor CloudSuite, through the Infor GenAI Platform, are as follows:
Infor GenAI Embedded Experiences
Infor GenAI Embedded Experiences provide generative AI capabilities within Infor CloudSuites, allowing applications to generate and analyze various types of content for customers. This includes assisted rich text authoring, summarization, comparison, contextualization, and language translations—all available directly within Infor's industry-specific CloudSuite applications. This capability can also help analyze non-textual content like images and documents. Infor GenAI Embedded Experiences is integrated across Infor CloudSuite applications, enabling each CloudSuite to provide features such as generating detailed product descriptions, automating the creation of job descriptions, simplifying document processing and analysis, comparing and summarizing vendor contracts, and much more.
Each CloudSuite connects to the Infor GenAI Embedded Experiences interface and gains access to the industry-leading FMs offered on Amazon Bedrock. To ensure these generative AI features are tailored to the specific needs of each CloudSuite, Infor's product teams have implemented customized prompt templates within their CloudSuite applications.
Fig. 1: Connectivity between CloudSuites and Infor GenAI Embedded Experiences
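As a rough illustration of this pattern, the sketch below combines a CloudSuite-style prompt template with a foundation model call through the Amazon Bedrock Converse API. The template text, model ID, and function names are illustrative assumptions, not Infor's actual implementation.

```python
import boto3

# Create a Bedrock Runtime client (region is illustrative).
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Hypothetical CloudSuite-specific prompt template.
PRODUCT_DESCRIPTION_TEMPLATE = (
    "You are an assistant embedded in an industry-specific ERP application.\n"
    "Write a concise, professional product description from these attributes:\n"
    "{attributes}"
)

def generate_product_description(attributes: str) -> str:
    """Fill the template and invoke a foundation model on Amazon Bedrock."""
    prompt = PRODUCT_DESCRIPTION_TEMPLATE.format(attributes=attributes)
    response = bedrock.converse(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # illustrative model choice
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0.2},
    )
    return response["output"]["message"]["content"][0]["text"]

print(generate_product_description("stainless steel bearing, 25 mm bore, sealed, 10,000 rpm"))
```

In practice, each CloudSuite would keep its own library of such templates so the same Embedded Experiences interface can serve very different industry contexts.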
Infor GenAI Assistant
Infor GenAI Assistant provides a conversational user experience for enterprise data and orchestration of various tasks. For example, a business user may need to check inventory levels or approve an order, which may require navigating multiple applications within the CloudSuite. Infor GenAI Assistant can dynamically resolve a user's intent across multiple systems of record or data sources, curate and contextualize the responses, and finally present them in a personalized way. Users are able to engage in multi-turn conversations with this assistant, which retains context throughout. This assistant will orchestrate various actions within the CloudSuite on the user’s behalf to intelligently deliver on the user’s objective.
Given the scale of the CloudSuites, this orchestration posed a considerable engineering challenge. Infor solved it with a decentralized approach in which CloudSuite engineering teams "onboard" their tools/APIs into the Infor GenAI Assistant. The orchestrator within the Assistant uses these tools to map a user's intent to the required sequence of actions. With the help of the industry-leading FMs on Amazon Bedrock, the Infor GenAI Assistant performs the various tasks needed to achieve the user's intended results.
Fig. 2: Infor GenAI Assistant architecture overview
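The sketch below illustrates the general tool-use pattern behind such an orchestrator, using the tool configuration of the Amazon Bedrock Converse API. The tool name, input schema, and model ID are illustrative assumptions rather than the Assistant's actual onboarded tools.

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Hypothetical tool "onboarded" by a CloudSuite engineering team.
tool_config = {
    "tools": [{
        "toolSpec": {
            "name": "check_inventory",
            "description": "Return on-hand inventory for a given item number.",
            "inputSchema": {"json": {
                "type": "object",
                "properties": {"item_number": {"type": "string"}},
                "required": ["item_number"],
            }},
        }
    }]
}

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # illustrative
    messages=[{"role": "user",
               "content": [{"text": "How many units of item A-1042 are in stock?"}]}],
    toolConfig=tool_config,
)

# When the model decides a tool is needed, it returns a toolUse block with the
# arguments the orchestrator should pass to the corresponding CloudSuite API.
for block in response["output"]["message"]["content"]:
    if "toolUse" in block:
        print(block["toolUse"]["name"], json.dumps(block["toolUse"]["input"]))
```

A production orchestrator would then call the selected CloudSuite API, feed the result back to the model as a tool result, and repeat until the user's objective is met.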
Infor GenAI Knowledge Hub
Infor GenAI Knowledge Hub is a self-service documentation hub, which allows businesses to unlock knowledge from various data sources, including product manual documents, user guides, human resources (HR) and legal policy documents, wiki pages, support tickets, and more.
It does this by using a Retrieval-Augmented Generation (RAG) approach. The GenAI Knowledge Hub ingests various knowledge sources, splits them into smaller chunks, and vectorizes each chunk using a text-embedding model offered on Amazon Bedrock. These vector representations and their original chunks are then stored in an Amazon OpenSearch Service index, which serves as the vector database.
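A minimal sketch of this ingestion flow is shown below: chunk a document, embed each chunk with a text-embedding model on Amazon Bedrock, and store the vector together with the original text in OpenSearch. The index name, chunk size, and model ID are illustrative assumptions, not details of Infor's pipeline.

```python
import json
import boto3
from opensearchpy import OpenSearch

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
opensearch = OpenSearch(hosts=[{"host": "my-opensearch-domain", "port": 443}], use_ssl=True)

def embed(text: str) -> list[float]:
    """Vectorize a text chunk with a text-embedding model on Amazon Bedrock."""
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",  # illustrative embedding model
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]

def chunk(document: str, size: int = 1000) -> list[str]:
    """Split a document into fixed-size chunks (a deliberately naive splitter)."""
    return [document[i:i + size] for i in range(0, len(document), size)]

def ingest(doc_id: str, document: str) -> None:
    """Store each chunk and its vector in a k-NN-enabled OpenSearch index."""
    for i, piece in enumerate(chunk(document)):
        opensearch.index(
            index="knowledge-hub",  # hypothetical index with a knn_vector mapping
            id=f"{doc_id}-{i}",
            body={"text": piece, "embedding": embed(piece)},
        )
```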
Infor GenAI Knowledge Hub has implemented access control on various indexed knowledge sources and their vector representations to prevent accidental disclosures.
When a search query comes in, Infor GenAI Knowledge Hub retrieves the relevant vectors and their plain-text chunks to form context, then uses an FM on Amazon Bedrock to generate an answer to the user's query. Infor GenAI Knowledge Hub exposes a search interface that various CloudSuite applications use for their information discovery use cases.
Fig. 3: Infor GenAI Knowledge Hub architecture overview
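Continuing the ingestion sketch above (and reusing its bedrock, opensearch, and embed definitions), the following sketch covers the query-time steps: embed the query, run a k-NN search restricted by a hypothetical access-control field, and ask a foundation model on Amazon Bedrock to answer only from the retrieved context. Field names, index name, and model ID are illustrative assumptions.

```python
def answer(query: str, user_groups: list[str]) -> str:
    """Retrieve relevant chunks (filtered by access) and generate a grounded answer."""
    hits = opensearch.search(
        index="knowledge-hub",
        body={
            "size": 4,
            "query": {
                "bool": {
                    "must": {"knn": {"embedding": {"vector": embed(query), "k": 4}}},
                    "filter": {"terms": {"allowed_groups": user_groups}},  # hypothetical ACL field
                }
            },
        },
    )
    context = "\n\n".join(h["_source"]["text"] for h in hits["hits"]["hits"])
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    response = bedrock.converse(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # illustrative
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]
```

Filtering the vector search by the caller's entitlements, as in the hypothetical allowed_groups field above, is one common way to enforce the kind of access control described earlier.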
Data privacy, security, accuracy, and ethical use of AI
The adoption of generative AI is nuanced and requires careful consideration. As an emerging technology, it has drawn scrutiny for bias and hallucinations. Another major concern for most customers is data privacy. To mitigate these concerns, Infor leverages its secure, multi-tenant architecture on AWS, where each tenant's data is isolated from every other tenant's. Infor uses AWS services such as AWS Identity and Access Management (IAM), AWS WAF (web application firewall), AWS Shield, AWS Key Management Service (AWS KMS), and others to secure, protect, and encrypt customer data. The use of AI within Infor's applications follows the same security model, with data being secure by design. Customer data is neither stored nor used to train AI models. Infor and AWS are committed to the ethical and responsible use of AI.
Infor GenAI features are designed with a “human-in-the-middle” approach. Any generated text or responses from AI must be reviewed by a business user before being consumed. Business users gain productivity benefits while also having full control to only apply outputs they deem appropriate.
Conclusion
Infor’s generative AI approach is systematic, focusing on the needs of our business users to enhance Infor’s Industry AI portfolio. In October 2024, Infor made its Infor GenAI capabilities available in all major AWS regions. This latest set of capabilities continues to empower organizations with predictive, prescriptive, and generative AI to drive greater personalization, productivity, and innovation for every user, in any industry.
To learn more, visit Infor GenAI or join us at our upcoming Infor Velocity Summit in Amsterdam on October 22–23, 2024.