Governments and non-profit organizations evaluating grant proposals face a significant challenge: sifting through hundreds of detailed submissions, each with unique merits, to identify the most promising initiatives. This arduous, time-consuming process is typically the first step in grant management, and it is critical to driving meaningful social impact.
The AWS Social Responsibility and Impact (SRI) team recognized an opportunity to augment this function using generative AI. The team developed an innovative solution to streamline grant proposal review and evaluation using the natural language processing (NLP) capabilities of Amazon Bedrock. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies such as AI21 Labs, Anthropic, Cohere, Mistral AI, Stability AI, and Amazon, along with a broad set of capabilities needed to build generative AI applications with security, privacy, and responsible AI.
Historically, AWS Health Equity Initiative applications were reviewed manually by a review committee, and each cycle took more than 14 days before all applications were fully reviewed. On average, the program received 90 applications per cycle. The June 2024 AWS Health Equity Initiative application cycle received 139 applications, the largest influx to date, and it took the review committee 21 days to process them manually. The Amazon Bedrock-centered approach reduced the review time to 2 days (a 90% reduction).
The goal was to improve the efficiency and consistency of the review process while empowering customers to build more impactful solutions. By combining the advanced NLP capabilities of Amazon Bedrock with thoughtful prompt engineering, the team created a dynamic, data-driven, and equitable solution demonstrating the transformative potential of large language models (LLMs) in the social impact domain.
This post explores the technical details and key learnings of the team's Amazon Bedrock-powered grant proposal review solution, offering a blueprint for organizations seeking to optimize their grant management processes.
Building effective prompts for reviewing grant proposals using generative AI
Prompt engineering is the art of crafting effective prompts to instruct and guide generative AI models, such as LLMs, toward the desired outputs. By thoughtfully designing prompts, practitioners can unlock the full potential of generative AI systems and apply them to a wide range of real-world scenarios.
When building the prompt for the Amazon Bedrock model to review grant proposals, we used multiple prompt engineering techniques to make sure the model's responses were tailored, structured, and actionable. These included assigning a specific persona, providing step-by-step instructions, and specifying the desired output format.
First, we assigned the model the persona of a public health expert. This context helps it evaluate proposals from the perspective of a subject matter expert (SME) who takes a holistic view of global issues and community-level impact. By clearly defining the persona, we made sure the model's responses aligned with the intended evaluation lens.
To account for different perspectives, multiple personas can be assigned against the same rubric. For example, when assigned the persona of a public health subject matter expert, the model provided sharp insights on the project's feasibility and evidence base. When assigned the persona of a venture capitalist, the model offered more robust feedback on the organization's stated milestones and its plan for sustainability after the funding period. Similarly, when assigned the persona of a software development engineer, the model focused its feedback on the proposed use of AWS technology.
Next, we broke the review process down into a series of structured instructions for the model to follow. These included reviewing the proposal, assessing specific aspects (impact potential, innovation, feasibility, and sustainability), and providing an overall summary and score. By outlining these step-by-step directions, we gave the model clear guidance on the required task elements, helping it produce comprehensive and consistent evaluations.
Finally, we specified the desired output format as JSON, with distinct sections for the evaluation of each aspect, the overall summary, and the overall score. Prescribing this structured response format makes sure the model's output can be readily consumed, stored, and analyzed downstream, rather than delivered as free-form text. This level of control over the output helps streamline the use of the model's evaluations in later stages of the process.
By combining these prompt engineering techniques (role assignment, step-by-step instructions, and output formatting), we created a prompt that elicits thorough, objective, and actionable grant proposal assessments from the generative AI model. This structured approach lets us use the model's capabilities effectively to support the grant review process in a scalable and efficient way.
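The following is a minimal sketch of how such a prompt could be assembled in Python. The persona wording, rubric aspects, and JSON field names are illustrative assumptions, not the team's actual prompt.

```python
# Illustrative prompt builder combining the three techniques described above:
# persona assignment, step-by-step instructions, and a prescribed JSON output format.
# The persona text, rubric aspects, and field names are hypothetical.
def build_review_prompt(persona: str, rubric_aspects: list[str], proposal_text: str) -> str:
    steps = "\n".join(
        f"{i + 1}. Assess the proposal's {aspect} and justify your assessment."
        for i, aspect in enumerate(rubric_aspects)
    )
    return f"""You are a {persona} reviewing a grant proposal.

Follow these steps:
{steps}
{len(rubric_aspects) + 1}. Provide an overall summary and an overall score from 1 to 10.

Return your evaluation strictly as JSON with this structure:
{{
  "aspects": {{"<aspect name>": {{"comments": "...", "score": 0}}}},
  "overall_summary": "...",
  "overall_score": 0
}}

Proposal:
{proposal_text}
"""


prompt = build_review_prompt(
    persona="public health subject matter expert",
    rubric_aspects=["impact potential", "innovation", "feasibility", "sustainability"],
    proposal_text="<submitted proposal text>",
)
```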
Building a dynamic proposal review application with Streamlit and generative AI
To demonstrate and test dynamic proposal reviews, we built a rapid prototype implementation using Streamlit, Amazon Bedrock, and Amazon DynamoDB. It's important to note that this implementation isn't intended for production use; it serves as a proof of concept and a starting point for further development. The application lets users define and save various personas and evaluation rubrics, which can then be dynamically applied when reviewing proposal submissions. This approach enables tailored and relevant evaluations of each proposal based on the specified criteria.
The application architecture consists of several key components, described in this section.
The team used DynamoDB, a NoSQL database, to store the personas, rubrics, and submitted proposals. The data stored in DynamoDB was retrieved by the Streamlit web application interface. In Streamlit, the team appended the persona and rubric to the prompt and sent the prompt to Amazon Bedrock.
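A minimal sketch of that storage layer follows, assuming hypothetical table names and key attributes; the team's actual schema isn't shown in this post.

```python
import boto3

# Hypothetical DynamoDB table and key attributes for the prototype's storage layer.
dynamodb = boto3.resource("dynamodb")
personas_table = dynamodb.Table("personas")


def save_persona(persona_id: str, description: str) -> None:
    # Persist a reusable reviewer persona.
    personas_table.put_item(Item={"persona_id": persona_id, "description": description})


def get_persona(persona_id: str) -> dict:
    # Retrieve a persona so the Streamlit app can append it to the model prompt.
    return personas_table.get_item(Key={"persona_id": persona_id}).get("Item", {})
```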
Amazon Bedrock used the Anthropic Claude 3 Sonnet FM to evaluate the submitted proposals against the prompt. The model prompt was generated dynamically based on the selected persona and rubric. Amazon Bedrock returned the evaluation results to Streamlit for team review.
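A condensed sketch of that model call is shown below, using the Amazon Bedrock Runtime Messages format for Claude 3 Sonnet; the prompt argument is the one produced by the builder sketched earlier, and parameters such as max_tokens are illustrative.

```python
import json

import boto3

bedrock_runtime = boto3.client("bedrock-runtime")


def evaluate_proposal(prompt: str) -> str:
    # Send the dynamically generated prompt to Claude 3 Sonnet via Amazon Bedrock.
    response = bedrock_runtime.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 2048,  # illustrative limit for the evaluation response
            "messages": [{"role": "user", "content": prompt}],
        }),
    )
    body = json.loads(response["body"].read())
    # The Messages API returns a list of content blocks; the first block holds the text.
    return body["content"][0]["text"]
```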
The following diagram illustrates the workflow of the solution shown in the preceding figure.
The workflow consists of the following steps (a condensed Streamlit sketch follows the list):
- Users can create and manage personas and rubrics through the Streamlit application. These are stored in a DynamoDB database.
- When a user submits a proposal for review, they choose the desired persona and rubric from the available options.
- The Streamlit application generates a dynamic prompt for the Amazon Bedrock model, incorporating the details of the selected persona and rubric.
- The Amazon Bedrock model evaluates the proposal based on the dynamic prompt and returns the evaluation results.
- The evaluation results are stored in the DynamoDB database and presented to the user through the Streamlit application.
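The following is a condensed Streamlit sketch of that flow, reusing the hypothetical build_review_prompt and evaluate_proposal helpers from the earlier sketches; the widget labels and option values are illustrative.

```python
import streamlit as st

# Illustrative Streamlit flow tying the workflow steps together.
# build_review_prompt and evaluate_proposal come from the earlier sketches.
st.title("Grant proposal review")

persona = st.selectbox(
    "Persona",
    ["public health subject matter expert", "venture capitalist", "software development engineer"],
)
rubric_aspects = ["impact potential", "innovation", "feasibility", "sustainability"]
proposal_text = st.text_area("Proposal text")

if st.button("Evaluate") and proposal_text:
    prompt = build_review_prompt(persona, rubric_aspects, proposal_text)
    result = evaluate_proposal(prompt)  # call to Amazon Bedrock
    st.json(result)  # display the structured JSON evaluation
    # A fuller prototype would also persist the result to DynamoDB at this point.
```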
Impact
This rapid prototype demonstrated the potential for a scalable and flexible proposal review process, enabling organizations to:
- Reduce application processing time by up to 90%
- Streamline the review process by automating evaluation tasks
- Capture structured data about the proposals and evaluations for further analysis
- Incorporate diverse perspectives by enabling multiple personas and rubrics
Throughout the implementation, the AWS SRI team focused on creating an interactive and user-friendly experience. By working hands-on with the Streamlit application and observing the impact of dynamic persona and rubric selection, users can gain practical experience building AI-driven applications that address real-world challenges.
Considerations for a production-grade implementation
While the rapid prototype demonstrates the potential of this solution, a production-grade implementation requires additional considerations and additional AWS services. Some key considerations include:
- Scalability and performance – A serverless architecture using AWS Lambda, Amazon API Gateway, DynamoDB, and Amazon Simple Storage Service (Amazon S3) improves the scalability, availability, and reliability needed to handle large numbers of concurrent users.
- Security and compliance – Depending on the sensitivity of the data involved, additional security measures such as encryption, authentication and access control, and auditing are required. Services such as AWS Key Management Service (AWS KMS), Amazon Cognito, AWS Identity and Access Management (IAM), and AWS CloudTrail can help meet these requirements.
- Monitoring and logging – Services such as Amazon CloudWatch and AWS X-Ray enable robust monitoring and logging mechanisms to track performance, identify issues, and maintain compliance.
- Automated testing and deployment – Implementing automated testing and deployment pipelines with AWS CodePipeline, AWS CodeBuild, and AWS CodeDeploy provides consistent deployments and reduces the risk of errors and downtime.
- Cost optimization – Implementing cost-optimization strategies, such as using AWS Cost Explorer and AWS Budgets, can help manage costs and maintain resource efficiency.
- Responsible AI considerations – Implementing Amazon Bedrock Guardrails and oversight mechanisms can help enforce the responsible and ethical use of the generative AI model, including bias detection, content moderation, and human oversight. Although the AWS Health Equity Initiative application form collected customer information such as names, email addresses, and countries of operation, this information was systematically omitted from the data sent to the Amazon Bedrock-enabled tool to avoid model bias and protect customer data (a minimal illustration of this omission step follows the list).
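As a simple illustration of that omission step, sensitive fields could be dropped from each application record before prompt construction; the field names below are hypothetical, not the actual application schema.

```python
# Hypothetical illustration of excluding personally identifiable fields
# from an application record before it is sent to Amazon Bedrock.
SENSITIVE_FIELDS = {"name", "email", "country_of_operation"}


def redact_application(application: dict) -> dict:
    # Keep only non-sensitive fields for prompt construction.
    return {key: value for key, value in application.items() if key not in SENSITIVE_FIELDS}
```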
By using the full suite of AWS services and following best practices for security, scalability, and responsible AI, organizations can build a production-ready solution that meets their specific requirements while achieving compliance, reliability, and cost-effectiveness.
Conclusion
Amazon Bedrock, combined with effective prompt engineering, enabled AWS SRI to review grant proposals and deliver awards to customers in days instead of weeks. The skills developed in this project, such as building web applications with Streamlit, integrating with NoSQL databases like DynamoDB, and customizing generative AI prompts, are highly transferable and applicable to a wide range of industries and use cases.
About the Authors
Caroline Will is the Global Lead for Customer Engagement on the AWS Social Responsibility and Impact team. She drives strategic initiatives that use cloud computing for social impact around the world. A passionate advocate for underserved communities, she co-founded two non-profit organizations serving individuals with developmental disabilities and their families. In her free time, Caroline enjoys mountain adventures with her family and friends.
Lauren Horis is a Program Manager for AWS Social Responsibility and Impact. She uses her background in economics, healthcare research, and technology to help mission-driven organizations deliver social impact with AWS cloud technology. In her free time, Lauren enjoys reading and playing the piano and cello.
Ben West is a hands-on builder with experience in machine learning, big data analytics, and full-stack software development. As a Technical Program Manager on the AWS Social Responsibility and Impact team, Ben develops innovative prototypes and supports organizations around the world in using cloud, edge, and Internet of Things (IoT) technologies to create positive impact. Ben is an Army veteran who enjoys cooking and being outdoors.
Mike Hagati is a Senior Systems Development Engineer (Sr. SysDE) at Amazon Web Services (AWS), working on the PACE-Edge team. In this role, he contributes to AWS edge computing initiatives as part of the Worldwide Public Sector (WWPS) Prototyping and Customer Engineering (PACE) organization. Beyond his professional work, Mike volunteers in pet therapy with his dog, Gnocchi, providing support services at local community facilities.