Wednesday, July 3, 2024

Build a self-service digital assistant using Amazon Lex and Amazon Bedrock knowledge base

Organizations strive to implement efficient, scalable, and cost-effective automated customer support solutions without compromising on customer experience. Generative Artificial Intelligence (AI)-powered chatbots play a key role in enabling human-like interactions by providing responses from a knowledge base without the need for a live agent. These chatbots can be efficiently leveraged to handle common inquiries, allowing live agents to focus on more complex tasks.

Amazon Lex provides advanced conversational interfaces using voice and text channels, with natural language understanding that identifies user intents more accurately and fulfills them faster.

Amazon Bedrock simplifies the process of developing and scaling generative AI applications powered by large language models (LLMs) and other foundation models (FMs). It offers access to a wide range of FMs from leading providers such as Anthropic, AI21 Labs, Cohere, and Stability AI, as well as Amazon’s own Amazon Titan models. Additionally, Amazon Bedrock Knowledge Bases enables you to develop applications that use Retrieval Augmented Generation (RAG), an approach that improves a model’s ability to generate contextually appropriate and informed responses by retrieving relevant information from your data sources.

The generative AI capability of QnAIntent for Amazon Lex lets you securely connect FMs to your enterprise data for RAG. QnAIntent provides an interface that uses your enterprise data and FMs on Amazon Bedrock to generate relevant, accurate, and contextual responses. You can use QnAIntent with new or existing Amazon Lex bots to automate FAQs through text and voice channels such as Amazon Connect.

This feature eliminates the need to create variations of intents, sample utterances, slots, and prompts to predict and handle a wide range of FAQs. Simply connect QnAIntent to your company’s knowledge sources and your bot can immediately handle questions using the authorized content.

In this post, we show you how to build a chatbot using QnAIntent that connects to a knowledge base in Amazon Bedrock (using Amazon OpenSearch Serverless as a vector database) to create a rich, self-service conversational experience for your customers.

Solution overview

This solution uses Amazon Lex, Amazon Simple Storage Service (Amazon S3), and Amazon Bedrock in the following steps:

  1. Users interact with the chatbot through a pre-built Amazon Lex Web UI.
  2. Each user request is processed by Amazon Lex to determine the user’s intent through a process called intent recognition.
  3. Amazon Lex provides a built-in generative AI capability, QnAIntent, that can connect directly to your knowledge base to fulfill user requests.
  4. The Amazon Bedrock knowledge base uses the Amazon Titan embedding model to convert the user query into a vector, which is then queried against the knowledge base to find chunks that are semantically similar to the user query. The user prompt is augmented with the results returned from the knowledge base as additional context and sent to the LLM to generate a response.
  5. The generated response is returned through QnAIntent and sent back to the user in the chat application through Amazon Lex.
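The RAG flow in step 4 can also be invoked programmatically through the Amazon Bedrock RetrieveAndGenerate API. The following is a minimal sketch using boto3; the knowledge base ID and model ARN shown are placeholders you would replace with your own values.

```python
def build_rag_request(query: str, kb_id: str, model_arn: str) -> dict:
    """Build the RetrieveAndGenerate request: the user query is embedded,
    matched against the knowledge base, and the retrieved chunks are passed
    to the LLM as additional context for the response."""
    return {
        "input": {"text": query},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
            },
        },
    }


def ask_knowledge_base(query: str, kb_id: str, model_arn: str) -> str:
    """Call the API. Requires AWS credentials and Amazon Bedrock model access."""
    import boto3

    client = boto3.client("bedrock-agent-runtime")
    response = client.retrieve_and_generate(
        **build_rag_request(query, kb_id, model_arn)
    )
    return response["output"]["text"]
```

QnAIntent performs an equivalent retrieve-and-generate call for you behind the scenes, so you don't need to write this code for the solution in this post; it is shown only to make the flow concrete.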

The following diagram shows the solution architecture and workflow.

The following sections provide more detail about the main components of the solution and provide high-level steps for implementing the solution.

  1. Create a knowledge base in Amazon Bedrock with OpenSearch Serverless as the vector store.
  2. Create an Amazon Lex bot.
  3. Create a new generative AI-powered intent in Amazon Lex using the built-in QnAIntent and specify your knowledge base.
  4. Deploy the sample Amazon Lex Web UI available in the GitHub repository. Configure the bot using the provided AWS CloudFormation template in your desired AWS Region.

Prerequisites

To implement this solution, you will need the following:

  1. An AWS account with permissions to create AWS Identity and Access Management (IAM) roles and policies. For more information, see Overview of Access Management: Permissions and Policies.
  2. Familiarity with AWS services such as Amazon S3, Amazon Lex, Amazon OpenSearch Service, and Amazon Bedrock.
  3. Model access enabled for the Amazon Titan Embeddings G1 – Text model and Anthropic Claude 3 Haiku on Amazon Bedrock. For instructions, see Model access.
  4. A data source in Amazon S3. For this post, we use the Amazon shareholder letters for 2022 and 2023 as the data source to hydrate the knowledge base.
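To prepare the S3 data source, upload the shareholder letters to a bucket in your account. The following boto3 sketch uses placeholder bucket, prefix, and file names; substitute your own.

```python
def data_source_keys(filenames: list, prefix: str = "shareholder-letters/") -> dict:
    """Map local file names to S3 object keys under a common prefix, so the
    knowledge base data source can point at one folder in the bucket."""
    return {name: prefix + name for name in filenames}


def upload_documents(bucket: str, keys: dict) -> None:
    """Upload each file. Requires AWS credentials with s3:PutObject on the bucket."""
    import boto3

    s3 = boto3.client("s3")
    for local_name, key in keys.items():
        s3.upload_file(local_name, bucket, key)
```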

Create a knowledge base

To create a new knowledge base in Amazon Bedrock, complete the following steps. For more information, see Create a knowledge base.

  1. On the Amazon Bedrock console, choose Knowledge bases in the navigation pane.
  2. Choose Create knowledge base.
  3. On the Provide knowledge base details page, enter a knowledge base name, IAM permissions, and tags.
  4. Choose Next.
  5. For Data source name, Amazon Bedrock prepopulates an auto-generated name; you can change it if needed.
  6. Keep the data source location in the same AWS account and choose Browse S3.
  7. Select the S3 bucket where you uploaded the Amazon shareholder documents, then choose Choose.
    This populates the S3 URI, as shown in the following screenshot.
  8. Choose Next.
  9. Choose an embeddings model to vectorize the documents. For this post, we use Titan Embeddings G1 – Text v1.2.
  10. Select Quick create a new vector store to create a default vector store with OpenSearch Serverless.
  11. Choose Next.
  12. Review the configuration and create the knowledge base.
    After the knowledge base is created successfully, you will see a knowledge base ID, which you will need when creating the Amazon Lex bot.
  13. Choose Sync to index the documents.
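The same steps can be scripted with the bedrock-agent API in boto3. This is a sketch only: the role ARN, collection ARN, index name, and field mappings below are placeholder assumptions (the console's Quick create option provisions the OpenSearch Serverless collection and index for you).

```python
def build_kb_request(name: str, role_arn: str, embedding_model_arn: str,
                     collection_arn: str) -> dict:
    """Request body for CreateKnowledgeBase backed by an OpenSearch
    Serverless vector store. Index and field names are placeholders."""
    return {
        "name": name,
        "roleArn": role_arn,
        "knowledgeBaseConfiguration": {
            "type": "VECTOR",
            "vectorKnowledgeBaseConfiguration": {
                "embeddingModelArn": embedding_model_arn,
            },
        },
        "storageConfiguration": {
            "type": "OPENSEARCH_SERVERLESS",
            "opensearchServerlessConfiguration": {
                "collectionArn": collection_arn,
                "vectorIndexName": "bedrock-kb-index",  # placeholder
                "fieldMapping": {
                    "vectorField": "embedding",         # placeholder
                    "textField": "text",
                    "metadataField": "metadata",
                },
            },
        },
    }


def create_and_sync(kb_request: dict, s3_bucket_arn: str) -> str:
    """Create the knowledge base, attach the S3 data source, and start an
    ingestion job (the console's Sync button). Requires AWS credentials."""
    import boto3

    client = boto3.client("bedrock-agent")
    kb = client.create_knowledge_base(**kb_request)
    kb_id = kb["knowledgeBase"]["knowledgeBaseId"]
    ds = client.create_data_source(
        knowledgeBaseId=kb_id,
        name="shareholder-letters",
        dataSourceConfiguration={
            "type": "S3",
            "s3Configuration": {"bucketArn": s3_bucket_arn},
        },
    )
    client.start_ingestion_job(
        knowledgeBaseId=kb_id,
        dataSourceId=ds["dataSource"]["dataSourceId"],
    )
    return kb_id
```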

Create an Amazon Lex bot

To create a bot, follow these steps:

  1. On the Amazon Lex console, choose Bots in the navigation pane.
  2. Choose Create bot.
  3. For Creation method, select Create a blank bot.
  4. For Bot name, enter a name (for example, FAQBot).
  5. For Runtime role, select Create a role with basic Amazon Lex permissions to access other services on your behalf.
  6. Leave the remaining settings as default and choose Next.
  7. On the Add language to bot page, choose from the supported languages.
    For this post, we choose English (US).
  8. Choose Done.

    After the bot is created successfully, you will be redirected to create a new intent.
  9. Add sample utterances for the new intent and choose Save intent.
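For automation, the bot-creation steps above can be sketched with the lexv2-models API in boto3. The bot name and role ARN are placeholders; dataPrivacy and an idle session timeout are required fields on CreateBot.

```python
def build_bot_request(bot_name: str, role_arn: str) -> dict:
    """Request body for CreateBot (lexv2-models)."""
    return {
        "botName": bot_name,
        "roleArn": role_arn,  # placeholder runtime role ARN
        "dataPrivacy": {"childDirected": False},
        "idleSessionTTLInSeconds": 300,  # 5-minute idle session timeout
    }


def create_bot_with_locale(bot_request: dict) -> str:
    """Create the bot and add the English (US) locale, as in step 7.
    Requires AWS credentials."""
    import boto3

    client = boto3.client("lexv2-models")
    bot = client.create_bot(**bot_request)
    bot_id = bot["botId"]
    client.create_bot_locale(
        botId=bot_id,
        botVersion="DRAFT",
        localeId="en_US",
        nluIntentConfidenceThreshold=0.40,
    )
    return bot_id
```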

Add QnAIntent to the intent

To add a QnAIntent, follow these steps:

  1. In the Amazon Lex console, navigate to the intent that you created.
  2. On the Add intent dropdown menu, choose Use built-in intent.
  3. For Built-in intent, choose AMAZON.QnAIntent – GenAI feature.
  4. For Intent name, enter a name (for example, QnABotIntent).
  5. Choose Add.

    After you add the QnAIntent, you will be redirected to configure the knowledge base.
  6. For Select model, choose Anthropic and Claude 3 Haiku.
  7. For Choose knowledge store, select Knowledge base for Amazon Bedrock and enter your knowledge base ID.
  8. Choose Save intent.
  9. After you save the intent, choose Build to build the bot.
    A Successfully built message appears when the build is complete.
    You can now test the bot on the Amazon Lex console.
  10. Choose Test to launch a draft version of your bot in a chat window within the console.
  11. Enter a question to get a response.
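If you prefer to script these steps, the built-in intent can be added with the lexv2-models CreateIntent API and tested through the runtime API. Treat the qnAIntentConfiguration field names below as assumptions to verify against the current API reference; all IDs and ARNs are placeholders.

```python
def build_qna_intent_request(bot_id: str, kb_arn: str, model_arn: str) -> dict:
    """Request body for CreateIntent using the built-in AMAZON.QnAIntent,
    pointing at an Amazon Bedrock knowledge base and an FM for generation."""
    return {
        "botId": bot_id,
        "botVersion": "DRAFT",
        "localeId": "en_US",
        "intentName": "QnABotIntent",
        "parentIntentSignature": "AMAZON.QnAIntent",
        "qnAIntentConfiguration": {
            "dataSourceConfiguration": {
                "bedrockKnowledgeStoreConfiguration": {
                    "bedrockKnowledgeBaseArn": kb_arn,
                },
            },
            "bedrockModelConfiguration": {"modelArn": model_arn},
        },
    }


def add_intent_and_ask(request: dict, alias_id: str, question: str) -> str:
    """Create the intent, then (after the bot is built) ask a question
    through the runtime API, as in steps 10 and 11. Requires AWS credentials."""
    import boto3

    boto3.client("lexv2-models").create_intent(**request)
    # Build the bot before querying; then:
    reply = boto3.client("lexv2-runtime").recognize_text(
        botId=request["botId"],
        botAliasId=alias_id,
        localeId=request["localeId"],
        sessionId="test-session",
        text=question,
    )
    return reply["messages"][0]["content"]
```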

Deploy the Amazon Lex Web UI

The Amazon Lex Web UI is a pre-built, fully featured web client for your Amazon Lex chatbot. It eliminates the hassle of recreating your chat UI from scratch. It enables you to quickly deploy features and minimizes time to value for your chatbot-powered applications. To deploy the UI, follow these steps:

  1. Follow the instructions in the GitHub repository.
  2. Before you deploy the CloudFormation template, update the LexV2BotId and LexV2BotAliasId values in the template based on the chatbot you created in your account.
  3. After the CloudFormation stack is deployed successfully, copy the WebAppUrl value from the stack Outputs tab.
  4. Navigate to the web UI to test the solution in your browser.
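The deployment can also be driven with boto3's CloudFormation client. The stack name and template URL below are placeholders; the two parameter keys come from the Web UI template.

```python
def build_stack_parameters(bot_id: str, alias_id: str) -> list:
    """CloudFormation parameters identifying your Lex V2 bot and alias."""
    return [
        {"ParameterKey": "LexV2BotId", "ParameterValue": bot_id},
        {"ParameterKey": "LexV2BotAliasId", "ParameterValue": alias_id},
    ]


def deploy_web_ui(stack_name: str, template_url: str, parameters: list) -> str:
    """Create the stack, wait for completion, and return the WebAppUrl
    output. Requires AWS credentials."""
    import boto3

    cfn = boto3.client("cloudformation")
    cfn.create_stack(
        StackName=stack_name,
        TemplateURL=template_url,  # placeholder template location
        Parameters=parameters,
        Capabilities=["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM"],
    )
    cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)
    outputs = cfn.describe_stacks(StackName=stack_name)["Stacks"][0]["Outputs"]
    return next(o["OutputValue"] for o in outputs if o["OutputKey"] == "WebAppUrl")
```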

Clean up

To avoid unnecessary charges in the future, clean up the resources that you created as part of this solution.

  1. If you created an Amazon Bedrock knowledge base and data in an S3 bucket specifically for this solution, delete them.
  2. Delete the Amazon Lex bot that you created.
  3. Delete the CloudFormation stack.
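The cleanup steps above can be collected into one boto3 sketch. All resource identifiers are placeholders for the values from your deployment.

```python
# Cleanup steps, mirroring the numbered list above.
CLEANUP_ORDER = ["knowledge_base", "s3_objects", "lex_bot", "cloudformation_stack"]


def cleanup(kb_id: str, bucket: str, bot_id: str, stack_name: str) -> None:
    """Delete the resources created for this solution.
    Requires AWS credentials with the corresponding delete permissions."""
    import boto3

    boto3.client("bedrock-agent").delete_knowledge_base(knowledgeBaseId=kb_id)
    boto3.resource("s3").Bucket(bucket).objects.all().delete()
    boto3.client("lexv2-models").delete_bot(
        botId=bot_id, skipResourceInUseCheck=True
    )
    boto3.client("cloudformation").delete_stack(StackName=stack_name)
```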

Conclusion

In this post, we discussed the importance of generative AI-powered chatbots in customer support systems. We then provided an overview of QnAIntent, a new Amazon Lex capability designed to connect FMs to company data. Finally, we demonstrated a practical use case of setting up a Q&A chatbot to analyze Amazon shareholder documents. This implementation not only provides fast and consistent customer service but also allows live agents to apply their expertise to solve more complex issues.

Stay up to date with the latest advancements in Generative AI and start building on AWS. If you need help getting started, visit the Generative AI Innovation Center.


About the Author

Supriya Puragundla is a Sr. Solutions Architect at AWS with over 15 years of IT experience in software development, design, and architecture. She helps key customer accounts with their data, generative AI, and AI/ML initiatives. She is passionate about data-driven AI and has deep expertise in ML and generative AI.

Manjula Nagineni is a Sr. Solutions Architect with AWS based in New York. She works with major financial services institutions, architecting and modernizing their large-scale applications while adopting AWS cloud services. She is passionate about designing cloud-centered big data workloads. She has over 20 years of IT experience in software development, analytics, and architecture across multiple domains, including finance, retail, and telecom.

Mani Khanuja is a Tech Lead for generative AI specialists, author of the book Applied Machine Learning and High Performance Computing on AWS, and a member of the Women in Manufacturing Education Foundation Board. She leads machine learning projects in various domains, including computer vision, natural language processing, and generative AI. She has spoken at internal and external conferences, including AWS re:Invent, Women in Manufacturing West, YouTube webinars, and GHC 23. In her free time, she likes to go for long runs along the beach.
