Email remains a vital communication channel for business customers, especially in human resources (HR) departments, where responding to inquiries can consume staff time and cause delays. Responding to email inquiries manually is difficult because it requires extensive domain knowledge. Going forward, advanced automation will play an important role in this area.
Generative AI allows businesses to improve the accuracy and efficiency of email management and automation. This technology enables automated responses and only requires manual human review in complex cases, streamlining operations and increasing overall productivity.
Combining Retrieval Augmented Generation (RAG) with a knowledge base improves the accuracy of automated responses. RAG combines retrieval-based and generation-based models to access data sources and generate accurate, contextually relevant responses. Drawing on trusted information from a comprehensive knowledge base helps the system provide better answers. This hybrid approach ensures that automated replies are not only contextually appropriate but also factually correct, making communications more reliable and trustworthy.
This post shows you how to automate responses to email inquiries using Amazon Bedrock Knowledge Bases and Amazon Simple Email Service (Amazon SES), both fully managed services. By linking user queries to relevant company domain information, Amazon Bedrock Knowledge Bases provides personalized responses. It delivers greater response accuracy and relevance by integrating foundation models (FMs) with internal data sources through RAG. Amazon SES is an email service that provides a simple way to send and receive email using your own email addresses and domains.
Retrieval Augmented Generation
RAG is an approach that integrates information retrieval into the natural language generation process. It involves two main workflows: data ingestion and text generation. The data ingestion workflow creates semantic embeddings of documents and stores those embeddings in a vector database. The text generation workflow creates an embedding of the user's question, compares its vector similarity against the stored document embeddings, and selects the most relevant document chunks to enrich the prompt. The retrieved information allows the model to generate more knowledgeable and accurate responses.
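To make these two workflows concrete, the following minimal Python sketch (illustrative only, not taken from the solution's repository) embeds a document chunk and a question with the Amazon Titan Text Embeddings V2 model on Amazon Bedrock and compares them by cosine similarity. In the actual solution, Amazon Bedrock Knowledge Bases performs these steps for you.

```python
import json
import math
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

def embed(text: str) -> list[float]:
    """Create a semantic embedding with the Amazon Titan Text Embeddings V2 model."""
    response = bedrock_runtime.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        contentType="application/json",
        accept="application/json",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Compare two embeddings; higher values mean more semantically similar text."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Chunks with the highest similarity to the question are used to enrich the prompt.
chunk = "Full-time employees accrue 20 days of paid vacation per year."
question = "How many days of paid vacation can I take?"
print(cosine_similarity(embed(chunk), embed(question)))
```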
Amazon Bedrock Knowledge Bases
For RAG workflows, Amazon Bedrock Knowledge Bases provides a managed knowledge base that stores unstructured data semantically in a vector database. This managed service simplifies deployment and scaling, allowing developers to focus on building RAG applications without worrying about infrastructure management. For more information about RAG and Amazon Bedrock Knowledge Bases, see Connect Foundation Models to your company data sources with Agents for Amazon Bedrock.
Solution overview
The solution presented in this post automatically responds to email inquiries using the following solution architecture. Key features include populating the knowledge base with domain-specific documentation to support RAG and automating the email responses.
As shown in the architecture diagram, the workflow for populating the knowledge base consists of the following steps:
- Users upload company and domain-specific information, such as policy manuals, to an Amazon Simple Storage Service (Amazon S3) bucket.
- This bucket is designated as the knowledge base data source.
- Amazon S3 invokes an AWS Lambda function to synchronize the data source with the knowledge base.
- The Lambda function calls the StartIngestionJob API. The knowledge base divides the documents in the data source into manageable chunks so that they can be searched efficiently. The knowledge base is configured to use Amazon OpenSearch Serverless as its vector store and the Amazon Titan Text Embeddings model on Amazon Bedrock to create the embeddings. In this step, the chunks are converted into embeddings and written to the vector index in the knowledge base's OpenSearch Serverless vector store, while also keeping track of the original document. A minimal sketch of this ingestion Lambda handler follows this list.
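The following is a minimal sketch of what such an ingestion Lambda handler could look like, assuming Python and boto3. The environment variable names are illustrative, and the implementation in the sample repository may differ.

```python
import os
import boto3

# The Bedrock Agent control-plane client exposes the StartIngestionJob API.
bedrock_agent = boto3.client("bedrock-agent")

def handler(event, context):
    """Triggered by object-created notifications on the knowledge base source bucket."""
    response = bedrock_agent.start_ingestion_job(
        knowledgeBaseId=os.environ["KNOWLEDGE_BASE_ID"],  # illustrative variable names
        dataSourceId=os.environ["DATA_SOURCE_ID"],
    )
    # The ingestion job chunks the new documents, creates embeddings with the
    # Amazon Titan model, and writes them to the OpenSearch Serverless vector index.
    return {"ingestionJobId": response["ingestionJob"]["ingestionJobId"]}
```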
The workflow for automating email responses using knowledge-based generated AI includes the following steps:
- Customers send natural language email inquiries to an address configured within your domain, such as info@example.com.
- Amazon SES receives the email and sends the entire email content to your S3 bucket, using the unique email identifier as the object key.
- An Amazon EventBridge rule is invoked when the email object is created in your S3 bucket and starts an AWS Step Functions state machine that coordinates generating and sending the email response.
- The Lambda function retrieves the email content from Amazon S3.
- The email identifier and receipt timestamp are recorded in an Amazon DynamoDB table. You can use this DynamoDB table to monitor and analyze the generated email responses.
- Using the body of the email inquiry, the Lambda function creates a prompt query and calls the Amazon Bedrock RetrieveAndGenerate API to generate the response (see the sketch after this list).
- Amazon Bedrock Knowledge Bases uses the Amazon Titan embeddings model to convert the prompt query into a vector embedding and find semantically similar chunks. The prompt is augmented with the chunks retrieved from the vector store and then sent, along with this additional context, to a large language model (LLM) for response generation. This solution uses Anthropic's Claude 3.5 Sonnet on Amazon Bedrock as the LLM, which uses the additional context to generate the user response. Anthropic's Claude 3.5 Sonnet is fast, affordable, and versatile, capable of handling tasks such as casual dialogue, text analysis, summarization, and document question answering.
- The Lambda function constructs an email reply from the generated response and uses Amazon SES to send it to the customer. Email tracking and processing information is updated in the DynamoDB table.
- If an automated email response can't be generated, the Lambda function forwards the original email to your internal support team, which reviews it and replies to the customer. The email disposition information is updated in the DynamoDB table.
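To illustrate the response-generation and reply steps above, the following Python sketch shows how a Lambda function could call the Amazon Bedrock RetrieveAndGenerate API against the knowledge base and send the generated answer back through Amazon SES. The environment variable names are assumptions for illustration, and the repository's implementation may differ.

```python
import os
import boto3

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime")
ses = boto3.client("ses")

def generate_reply(inquiry_text: str) -> str:
    """Augment the inquiry with knowledge base chunks and generate a grounded answer."""
    response = bedrock_agent_runtime.retrieve_and_generate(
        input={"text": inquiry_text},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": os.environ["KNOWLEDGE_BASE_ID"],  # illustrative
                "modelArn": os.environ["MODEL_ARN"],  # e.g. a Claude 3.5 Sonnet model ARN
            },
        },
    )
    return response["output"]["text"]

def send_reply(customer_address: str, subject: str, body: str) -> None:
    """Send the generated answer back to the customer through Amazon SES."""
    ses.send_email(
        Source=os.environ["SOURCE_EMAIL"],  # illustrative; the verified inquiry address
        Destination={"ToAddresses": [customer_address]},
        Message={
            "Subject": {"Data": f"Re: {subject}"},
            "Body": {"Text": {"Data": body}},
        },
    )
```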
Prerequisites
To set up this solution, you must meet the following prerequisites:
- A local machine or virtual machine (VM) on which you can install and run AWS Command Line Interface (AWS CLI) tools.
- A local environment prepared for deploying the AWS Cloud Development Kit (AWS CDK) stack, as described in Getting started with the AWS CDK. You can bootstrap your environment with cdk bootstrap aws://{ACCOUNT_NUMBER}/{REGION}.
- A valid domain name with configuration privileges. If your domain name is registered in Amazon Route 53 and managed by the same account, the AWS CDK configures Amazon SES for you. If your domain is managed elsewhere, some manual steps are required (detailed later in this post).
- Access to the Amazon Bedrock models used for embedding and querying must be enabled. For more information, see Access Amazon Bedrock foundation models. The default configuration requires the following models to be enabled:
- Amazon Titan Text Embeddings V2
- Anthropic Claude 3.5 Sonnet
Deploy the solution
To deploy the solution, follow these steps:
- Configure your SES domain identity so that Amazon SES can send and receive messages.
If you want to receive email for a domain managed by Route 53 in the same account, provide the ROUTE53_HOSTED_ZONE context variable and the AWS CDK configures Amazon SES for you. If your domain is managed in another account or by a registrar other than Route 53, see Creating and verifying identities in Amazon SES to manually verify your domain identity, and manually publish the MX record required for Amazon SES to receive email for your domain.
- Clone the repository and change to its root directory.
- Install dependencies.
npm install
- Deploy the AWS CDK app, replacing {EMAIL_SOURCE} with the email address for receiving inquiries, {EMAIL_REVIEW_DEST} with the email address for internal review of messages that fail to auto-reply, and {HOSTED_ZONE_NAME} with your domain name:
At this point, Amazon SES is configured with a verified domain identity in sandbox mode. You can now send email to addresses within that domain. If you need to send email to users with a different domain name, you must request production access.
Upload domain documents to Amazon S3
Now that the knowledge base is running, you need to feed the vector store with the raw data you want to query. To do this, upload raw text data to the S3 bucket that serves as the knowledge base data source.
- Find the bucket name in the AWS CDK output (KnowledgeBaseSourceBucketArn/Name).
- Upload a text file through the Amazon S3 console or the AWS CLI.
When testing this solution, we recommend using the following open source HR manual documentation. Upload the files from either the Markdown or the PDF folder; the knowledge base automatically synchronizes these files to the vector database.
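If you prefer to script the upload instead of using the Amazon S3 console, the following Python sketch uploads a local document to the data source bucket. The bucket and file names are placeholders; substitute the bucket name from the AWS CDK output.

```python
import boto3

s3 = boto3.client("s3")

# Replace with the bucket name from the KnowledgeBaseSourceBucketArn/Name CDK output.
bucket_name = "your-knowledge-base-source-bucket"

# Uploading a document triggers the ingestion Lambda function, which synchronizes
# the knowledge base so the new content becomes searchable.
s3.upload_file("hr-manual.md", bucket_name, "hr-manual.md")
```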
Test the solution
To test your solution, send an email to the address defined in the “sourceEmail” context parameter. If you chose to upload the sample HR documents, you can use the following example questions.
- “How many days of paid vacation can I take?”
- “Who do I report HR violations to?”
Clean up
There is a cost associated with running this solution. To clean up your resources, run the following command from your project folder:
Conclusion
In this post, we discussed the important role of email as a communication channel for business users and the challenges of responding to email manually. We outlined how to use a RAG architecture and Amazon Bedrock Knowledge Bases to automate email responses, resulting in improved HR prioritization and an enhanced user experience. Finally, we provided a solution architecture and sample code in a GitHub repository for automatically generating and sending contextual email responses using a knowledge base.
For more information, see the Amazon Bedrock User Guide and the Amazon SES Developer Guide.
About the authors
Darrin Webber is a Senior Solutions Architect at AWS, helping customers advance their cloud journeys with secure, scalable, and innovative AWS solutions. He has over 25 years of experience in architecture, application design and development, digital transformation, and the Internet of Things. When Darrin isn't helping customers transform and optimize their businesses with innovative cloud solutions, he enjoys hiking and playing pickleball.
Mark Luscher is a Senior Solutions Architect at AWS, helping enterprise customers succeed with a focus on threat detection, incident response, and data protection. His background is in networking, security, and observability. Previously, he held hands-on technical architecture and security roles in the healthcare sector as an AWS customer. Outside of work, Mark keeps three dogs, four cats, and over 20 chickens, and hones his furniture making and woodworking skills.
Matt Richards is a Senior Solutions Architect at AWS, serving customers in the retail industry. A former AWS customer himself, he has a background in software engineering and solutions architecture, and now focuses on helping other customers with their application modernization and digital transformation efforts. Outside of work, Matt has a passion for music, singing, and playing the drums in several groups.