Today, we are announcing the general availability of Amazon Bedrock Prompt Management, with new features that provide enhanced options for configuring prompts and enable seamless integration for invoking them in generative AI applications.
Amazon Bedrock Prompt Management simplifies the creation, evaluation, versioning, and sharing of prompts, helping developers and prompt engineers get better responses from foundation models (FMs) for their use cases. In this post, we review the key features of Amazon Bedrock Prompt Management and provide examples of how you can use these tools to help optimize prompt performance and outputs for your specific use cases.
New features in Amazon Bedrock Prompt Management
Amazon Bedrock Prompt Management offers new features that simplify the process of building generative AI applications:
- Structured prompts – Define system instructions, tools, and additional messages when creating your prompts.
- Converse and InvokeModel API integration – Invoke cataloged prompts directly from Amazon Bedrock Converse and InvokeModel API calls.
To show the new features in action, let's look at an example of building a prompt that summarizes financial statements.
Create a new prompt
To create a new prompt, follow these steps:
- In the Amazon Bedrock console, in the navigation pane, under Builder tools, choose Prompt management.
- Choose Create prompt.
- Provide a name and a description, and choose Create.
Now you can customize your prompt in the prompt builder:
- For System instructions, define the role of the model. For this example, enter the following:
You are an expert financial analyst with years of experience in summarizing complex financial documents. Your task is to provide clear, concise, and accurate summaries of financial reports.
- Enter the text of the prompt in the User message box.
You can create variables by enclosing a name in double curly braces. You can later pass values for these variables at invocation time, and they are injected into your prompt template.
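For illustration, a user message for this financial summarization use case might look like the following sketch, where {{document}} is a hypothetical variable name whose value is filled in at invocation time:

```
Summarize the following financial statement in three to five bullet points,
highlighting overall revenue, net income, and any notable risks:

{{document}}
```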
- Optionally, configure tools for use with function calling in the Tools setting section.
You can define tools with names, descriptions, and input schemas that allow your model to interact with external functions and extend its capabilities. Provide a JSON schema that contains the tool information.

When using function calling, an LLM doesn't use tools directly; instead, the model indicates the tool and parameters needed to use it. You must implement the logic that invokes tools based on the model's requests and feeds the results back to the model. For more information, see Use a tool to complete an Amazon Bedrock model response.
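As a sketch, a tool definition following the Converse API toolSpec format might look like the following; the tool name, description, and schema fields here are hypothetical examples for this use case:

```json
{
  "toolSpec": {
    "name": "get_stock_price",
    "description": "Returns the most recent closing price for a given stock ticker symbol.",
    "inputSchema": {
      "json": {
        "type": "object",
        "properties": {
          "ticker": {
            "type": "string",
            "description": "The stock ticker symbol, for example AMZN"
          }
        },
        "required": ["ticker"]
      }
    }
  }
}
```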
- Choose Save to save your settings.
Compare prompt variations
You can create and compare multiple variants of your prompt to find the best one for your use case, customizing each variant manually. To compare prompt variants, follow these steps:
- Choose Compare variants.
- The original variant is already present. You can add new variants manually by specifying the number you want to create.
- You can customize the user message, system instructions, tool configuration, and additional messages for each new variant.
- You can also create different variants for different models. Choose Select model to choose the specific FM for testing each variant.
- Choose Run all to compare the outputs of all the prompt variants with their selected models.
- If a variant performs better than the original, you can choose Replace original prompt to update your prompt.
- On the Prompt builder page, choose Create version to save the updated prompt.
This approach allows you to fine-tune your prompts to your specific model or use case, making it easy to test and improve your results.
Invoke a prompt
To invoke a prompt from your applications, you can now include the prompt identifier and version as part of the Amazon Bedrock Converse API call. The following code is an example using the AWS SDK for Python (Boto3).
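Here is a minimal sketch, assuming a prompt template with a single {{document}} variable; the Region, account ID, prompt ID, version, and variable name are placeholders for this example:

```python
import boto3

# Create an Amazon Bedrock Runtime client (Region is an example)
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# ARN of a prompt stored in Amazon Bedrock Prompt Management
# (account ID, prompt ID, and version are placeholders)
prompt_arn = "arn:aws:bedrock:us-east-1:111122223333:prompt/PROMPT_ID:1"

# Invoke the prompt through the Converse API, passing the prompt ARN as
# the model ID and supplying a value for the {{document}} template variable
response = bedrock_runtime.converse(
    modelId=prompt_arn,
    promptVariables={
        "document": {"text": "AnyCompany reported Q3 revenue of $1.2M ..."}
    },
)

# Print the generated summary from the model's response
print(response["output"]["message"]["content"][0]["text"])
```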
We pass the prompt Amazon Resource Name (ARN) in the model ID parameter, along with the prompt variables as a separate parameter. Amazon Bedrock loads the prompt version directly from the prompt management library and runs the invocation without latency overhead. This approach enables direct prompt invocation through the Converse or InvokeModel APIs, eliminating manual retrieval and formatting and simplifying workflows. It also allows your team to reuse and share prompts and keep track of different versions.
For more information about using these features, including the required permissions, see the Amazon Bedrock documentation.
You can invoke prompts in other ways as well.
Now available
Amazon Bedrock Prompt Management is now generally available in the US East (N. Virginia) and US West (Oregon) AWS Regions. For pricing information, see Amazon Bedrock Pricing.
Conclusion
The general availability of Amazon Bedrock Prompt Management introduces powerful capabilities that enhance the development of generative AI applications. By providing a centralized platform for creating, customizing, and managing prompts, developers can streamline their workflows and improve prompt performance. The ability to define system instructions, configure tools, and compare prompt variations allows teams to craft effective prompts tailored to their specific use cases. With seamless integration into the Amazon Bedrock Converse API and support for popular frameworks, organizations can now build and deploy AI solutions that are more likely to produce relevant outputs.
About the authors
Dani Mitchell is a Generative AI Specialist Solutions Architect at AWS. He is focused on computer vision use cases and helps companies across EMEA accelerate their ML and generative AI journeys with Amazon SageMaker and Amazon Bedrock.
Ignacio Sánchez is a Spatial and AI/ML Specialist Solutions Architect at AWS. He combines his skills in augmented reality and AI to help companies improve how people interact with technology, making it more accessible and enjoyable for end users.