In this post, you set up an agent to act as a software application builder assistant using Amazon Bedrock Agents.
Agentic workflows are a new approach to building dynamic, complex business workflows, with large language models (LLMs) serving as the reasoning engine, or brain. These agentic workflows break natural language tasks down into multiple executable steps with iterative feedback loops and self-reflection, and use tools and APIs to produce final results.
Amazon Bedrock Agents helps accelerate generative AI application development by orchestrating multi-step tasks. Agents use the reasoning capability of foundation models (FMs) to break down user-requested tasks into multiple steps. They use developer-provided instructions to create an orchestration plan, then carry out the plan by invoking enterprise APIs and accessing knowledge bases using Retrieval Augmented Generation (RAG), and provide a final response to the end user. This offers tremendous use case flexibility, enables dynamic workflows, and reduces development cost. Amazon Bedrock Agents is instrumental in customizing and tailoring apps to meet specific project requirements while protecting private data and securing applications. These agents work with AWS managed infrastructure capabilities and Amazon Bedrock, reducing infrastructure management overhead. Additionally, agents streamline workflows and automate repetitive tasks. With the power of AI automation, you can boost productivity and reduce cost.
Amazon Bedrock is a fully managed service that offers a choice of high-performing FMs from leading AI companies, such as AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon, through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI.
Solution overview
A typical three-tier software application has a UI layer, an API middle layer (the backend), and a database layer. The generative AI-based application builder assistant described in this post helps you accomplish tasks across all three tiers. It can generate and explain code snippets for the UI and backend layers in the language of your choice, improving developer productivity and accelerating development of use cases. The agent can also recommend software and architectural design best practices using the AWS Well-Architected Framework for the overall system design.
The agent can generate SQL queries from natural language questions using the database schema DDL (Data Definition Language for SQL) and run them against a database instance in the database tier.
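To make this flow concrete, the following sketch shows a hypothetical DDL schema of the kind stored in the knowledge base and the SQL an agent might generate for a question like "What are the top 5 most expensive products?" The table, column names, and query here are illustrative only, not the post's actual sample database; SQLite stands in for the database tier.

```python
import sqlite3

# Hypothetical DDL schema of the kind configured as a knowledge base.
DDL = """
CREATE TABLE products (
    product_id INTEGER PRIMARY KEY,
    name       TEXT NOT NULL,
    unit_price REAL NOT NULL
);
"""

# SQL an agent might generate for:
# "What are the top 5 most expensive products?"
GENERATED_SQL = """
SELECT name, unit_price
FROM products
ORDER BY unit_price DESC
LIMIT 5;
"""

def run_generated_sql(ddl, sql, rows):
    """Create an in-memory database from the DDL, load rows, and run the SQL."""
    conn = sqlite3.connect(":memory:")
    conn.executescript(ddl)
    conn.executemany("INSERT INTO products VALUES (?, ?, ?)", rows)
    return conn.execute(sql).fetchall()
```

Executing the generated SQL against a live instance, as the agent does in the database tier, is what lets the assistant validate its own query rather than just emit text.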
This assistant uses Amazon Bedrock Agents with two knowledge bases. Amazon Bedrock Knowledge Bases inherently uses Retrieval Augmented Generation (RAG). A typical RAG implementation consists of two parts:
- A data pipeline that ingests data from documents, typically stored in Amazon Simple Storage Service (Amazon S3), into a knowledge base, namely a vector database such as Amazon OpenSearch Serverless, so that it's available for lookup when a question is received.
- An application that receives the user's question, looks up relevant information (context) in the knowledge base, creates a prompt that includes the question and the context, and provides it to the LLM to generate a response.
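The two parts above can be sketched as a toy pipeline. This is an illustration only: a real deployment uses an embedding model and a vector store such as Amazon OpenSearch Serverless, whereas here simple keyword overlap stands in for vector similarity, and the prompt template is made up for the example.

```python
# Toy sketch of the two-part RAG pattern: (1) ingest documents into a
# searchable knowledge base, (2) retrieve context for a question and
# build a prompt for the LLM. Keyword overlap replaces vector search.

def ingest(documents):
    """Part 1: build a searchable 'knowledge base' from raw documents."""
    return [{"text": doc, "terms": set(doc.lower().split())} for doc in documents]

def retrieve(knowledge_base, question, top_k=1):
    """Return the document(s) sharing the most terms with the question."""
    q_terms = set(question.lower().split())
    ranked = sorted(knowledge_base,
                    key=lambda d: len(d["terms"] & q_terms),
                    reverse=True)
    return [d["text"] for d in ranked[:top_k]]

def build_prompt(question, context):
    """Part 2: combine the question and retrieved context for the LLM."""
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
```

In the actual solution, ingestion and retrieval are both handled by Amazon Bedrock Knowledge Bases; only the division of labor is the same as in this sketch.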
The following diagram shows how the application builder assistant acts as a coding assistant, recommends AWS design best practices, and assists with SQL code generation.
Based on the three workflows in the preceding diagram, let's explore the types of tasks required for different use cases.
- Use case 1 – To create and validate a SQL query against a database, use the existing DDL schemas configured as Knowledge Base 1 to come up with the SQL query. The following are sample user queries:
- What are the total annual sales?
- What are the top 5 most expensive products?
- What is each employee’s total income?
- Use case 2 – For recommendations on design best practices, look up the AWS Well-Architected Framework knowledge base (Knowledge Base 2). The following are sample user queries:
- How do I design a secure VPC?
- What are some S3 best practices?
- Use case 3 – To write code, such as a helper function like an email validator, or explain existing code, use prompt engineering techniques with the default agent LLM. In this example, the LLM generates an email validation function. The following are sample user queries:
- Create a Python function that validates the syntax of an email address.
- Explain the following code in clear natural language.
$code_to_explain
(This variable is set from the content of the selected code file. See the notebook for details.)
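For use case 3, the agent is asked to generate an email-syntax validator. The following is a hand-written sketch of the kind of helper such a prompt might produce, not the agent's actual output; it checks common syntax only, since full RFC 5322 validation is far more involved.

```python
import re

# Minimal email-syntax check of the kind requested in use case 3.
# Matches local-part @ domain with at least one dot and a 2+ letter TLD.
EMAIL_PATTERN = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def is_valid_email(address: str) -> bool:
    """Return True if the address matches a common email syntax pattern."""
    return bool(EMAIL_PATTERN.match(address))
```

Asking the agent to explain a function like this (via the $code_to_explain variable) exercises the same default-LLM path in the other direction: code in, natural language out.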
Prerequisites
To run this solution in your AWS account, you must meet the following prerequisites:
- Clone the GitHub repository and follow the instructions in the README.
- Set up an Amazon SageMaker notebook on an ml.t3.medium Amazon Elastic Compute Cloud (Amazon EC2) instance. For this post, we provide an AWS CloudFormation template, available in the GitHub repository. The CloudFormation template also provides the required AWS Identity and Access Management (IAM) access to set up the vector database, SageMaker resources, and AWS Lambda.
- Acquire access to models hosted on Amazon Bedrock. Choose Manage model access in the navigation pane of the Amazon Bedrock console and choose from the list of available options. This post uses Anthropic's Claude 3 Sonnet on Amazon Bedrock and Amazon Titan Text Embeddings v2 on Amazon Bedrock.
Implement the solution
The notebook in the GitHub repository covers the following learning objectives:
- Select the FM on which the agent is based.
- Create clear and concise agent instructions for using one of the two knowledge bases or the base agent LLM. (An example is shown later in this post.)
- Create an action group and associate it with your API schema and Lambda function.
- Create, connect, and populate data into two knowledge bases.
- Create, start, test, and deploy agents.
- Generate UI and backend code using the LLM.
- Use the AWS Well-Architected Framework guidelines to recommend AWS best practices for system design.
- Generate, run, and validate SQL based on natural language understanding using the LLM, few-shot examples, and a database schema stored as a knowledge base.
- Clean up agent resources and their dependencies using a script.
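The agent-creation step in the objectives above can be sketched with the boto3 bedrock-agent client. The agent name, role ARN, and instruction below are placeholders for illustration; the notebook supplies its own values, and the actual API call is shown commented out because it requires AWS credentials and a provisioned IAM role.

```python
# Sketch of the agent-creation step. All values below are placeholders.

def build_create_agent_request(name, role_arn, instruction,
                               model_id="anthropic.claude-3-sonnet-20240229-v1:0"):
    """Assemble the parameters for the bedrock-agent CreateAgent API call."""
    return {
        "agentName": name,
        "agentResourceRoleArn": role_arn,
        "foundationModel": model_id,
        "instruction": instruction,
    }

# With AWS credentials configured, the request would be sent like this:
#   import boto3
#   client = boto3.client("bedrock-agent")
#   response = client.create_agent(**build_create_agent_request(
#       "app-builder-assistant",
#       "arn:aws:iam::123456789012:role/AgentRole",   # placeholder ARN
#       "You are a software application builder assistant.",
#   ))
```

After creation, the notebook prepares the agent, adds the action group and knowledge base associations, and creates an alias for invocation, matching the remaining objectives in the list.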
Agent instructions and user prompts
The instructions for the Application Builder Assistant agent are as follows:
Each user question to the agent by default includes the following system prompt:
Note: The system prompt stays the same for each agent invocation; only {user_question_to_agent} is replaced with the user's query.
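The placeholder substitution is plain string templating. The template text below is illustrative; the post's actual system prompt lives in the notebook, and only the substitution mechanics are shown here.

```python
# Illustrative system-prompt template with the placeholder from the post.
SYSTEM_PROMPT_TEMPLATE = (
    "You are a software application builder assistant.\n"
    "Answer the following question: {user_question_to_agent}"
)

def render_prompt(question: str) -> str:
    """Insert the user's query into the otherwise fixed system prompt."""
    return SYSTEM_PROMPT_TEMPLATE.format(user_question_to_agent=question)
```

Because the surrounding prompt is fixed, every invocation differs only in the substituted question, which keeps agent behavior consistent across queries.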
Cost considerations
Important cost considerations include:
- With this current implementation, there are no separate charges for creating resources with Amazon Bedrock Knowledge Bases or Amazon Bedrock Agents.
- There are charges for the embedding model and text model invocations on Amazon Bedrock. For more details, refer to Amazon Bedrock Pricing.
- There are charges for Amazon S3 and vector DB usage. For more details, see Amazon S3 Pricing and Amazon OpenSearch Service Pricing, respectively.
Clean up
To avoid unnecessary costs, the implementation automatically cleans up resources after an entire run of the notebook. Check the notebook instructions in the Clean Up Resources section for how to avoid the automatic cleanup and experiment with different prompts.
The order of resource cleanup is as follows:
- Disable the action group.
- Delete the action group.
- Delete the alias.
- Delete the agent.
- Delete the Lambda function.
- Empty the S3 bucket.
- Delete the S3 bucket.
- Delete the IAM roles and policies.
- Delete the vector DB collection policies.
- Delete the knowledge bases.
Conclusion
In this post, we showed you how to create a generative AI-based software application builder assistant using Amazon Bedrock Agents that can query and integrate workflows with multiple knowledge bases, write and explain code, generate and run SQL using DDL schemas, and recommend design suggestions using the AWS Well-Architected Framework.
In addition to the code generation and code explanation capabilities covered in this post, to run and troubleshoot your application code in a secure test environment, see Setting Up a Code Interpreter with Amazon Bedrock Agents.
For more information about creating agents to coordinate workflows, see Amazon Bedrock Agents.
Acknowledgment
The authors would like to thank all reviewers for their valuable feedback.
About the author
Shayan Ray is an Applied Scientist at Amazon Web Services. His area of research is all things natural language (such as NLP, NLU, and NLG). His work has focused on conversational AI, task-oriented dialogue systems, and LLM-based agents. His research publications are on natural language processing, personalization, and reinforcement learning.