This post was co-authored by Anthony Medeiros, Manager of Solutions Engineering and Architecture for North American Artificial Intelligence, and Adrian Boeh, Senior Data Scientist for NAM AI at Schneider Electric.
Schneider Electric is a global leader in the digital transformation of energy management and automation. The company specializes in providing integrated solutions that enable energy security, reliability, efficiency, and sustainability. Schneider Electric serves a wide range of industries, including smart manufacturing, resilient infrastructure, future-proof data centers, intelligent buildings, and intuitive homes. The company offers products and services spanning power distribution, industrial automation, and energy management. Schneider Electric’s innovative technology, wide range of products, and commitment to sustainability have positioned the company as a leading player in driving smart, green solutions for the modern world.
As the demand for renewable energy continues to rise, Schneider Electric has seen a surge in demand for sustainable microgrid infrastructure. These requests arrive in the form of requests for proposal (RFPs), and each RFP must be manually reviewed by Schneider’s microgrid subject matter experts (SMEs). The team found that manually reviewing every RFP was too costly and did not scale to meet industry needs. To solve this problem, Schneider turned to Amazon Bedrock and generative artificial intelligence (AI). Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies, including AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon, through a single API, along with the capabilities needed to build generative AI applications with security, privacy, and responsible AI.
In this post, we show how Schneider’s team worked with the AWS Generative AI Innovation Center (GenAIIC) to build a generative AI solution on Amazon Bedrock to solve this problem. The solution processes and evaluates each RFP and routes high-value RFPs to microgrid SMEs for approval and recommendation.
The problem
Microgrid infrastructure is a key component of the growing renewable energy market. Microgrids include on-site power generation and storage that allow the system to be disconnected from the main grid. Schneider Electric offers several key products that help customers build microgrid solutions to make homes, schools, and manufacturing centers more sustainable. Increased public and private investment in this field has led to a dramatic increase in the number of RFPs for microgrid systems.
RFP documents contain technically complex textual and visual information, such as scopes of work, parts lists, and electrical diagrams, and they can run to hundreds of pages. The following diagram shows some examples of RFP documents. Because of their size and complexity, reviewing RFPs is costly and labor intensive: typically, an experienced SME must review each RFP in its entirety and evaluate its relevance to the business and its potential to convert into a sale.

Sample request for proposal (RFP) input data
To add further complexity, the same set of RFP documents may be evaluated by multiple business units within Schneider. Each department may be looking for different requirements to close deals relevant to its sales team.
Given the size and complexity of RFP documents, the Schneider team needed a way to quickly and accurately identify opportunities where Schneider products could provide a competitive advantage and a high potential for conversion. Failing to respond to viable opportunities means lost revenue, while responding to proposals where the company has no clear competitive edge leads to an inefficient use of time and effort.
The team also needed a solution that could be reused by other business units to extend the impact across the enterprise. By successfully handling the influx of RFPs, the Schneider team can not only scale its microgrid business, but also help businesses and industries adopt new renewable energy paradigms.
Amazon Bedrock and Generative AI
To solve this problem, the Schneider team turned to generative AI and Amazon Bedrock. Large language models (LLMs) enable more efficient business processes through their ability to identify and summarize specific categories of information with human-like accuracy. The volume and complexity of RFP documents made them ideal candidates for generative AI-based document processing.
Amazon Bedrock allows you to build and scale generative AI applications with a wide range of FMs. It is a fully managed service that includes FMs from Amazon and third-party providers, supporting a variety of use cases. For more information about the available models, see Amazon Bedrock Supported Foundation Models. Amazon Bedrock lets developers create unique experiences with generative AI capabilities, with support for a wide range of programming languages and frameworks.
This solution uses Anthropic Claude on Amazon Bedrock, specifically the Claude 3 Sonnet model. For most workloads, Claude 3 Sonnet operates twice as fast as Claude 2 and Claude 2.1, with higher levels of intelligence.
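As a minimal sketch (not Schneider’s production code, and assuming the standard Claude 3 Sonnet model ID with a placeholder Region and prompt), an application can invoke the model through the Amazon Bedrock runtime API with boto3:

```python
import json
import boto3

# Minimal sketch: invoke Anthropic Claude 3 Sonnet through Amazon Bedrock.
# The Region, model ID, and prompt below are illustrative assumptions.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 1024,
    "messages": [
        {"role": "user", "content": "Summarize the scope of work in this RFP excerpt: ..."}
    ],
}

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=json.dumps(body),
)
print(json.loads(response["body"].read())["content"][0]["text"])
```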
Solution overview
Traditional Retrieval Augmented Generation (RAG) systems cannot determine the relevance of an RFP document to a specific sales team because of the extensive list of one-off business requirements and the large taxonomy of electrical components and services that exist in the documents.
Other existing approaches require either expensive domain-specific fine-tuning of the LLM or filtering of noisy data elements, both of which have suboptimal performance and scalability.
Instead, the AWS GenAIIC team worked with Schneider Electric to convey the business objectives to the LLM through multiple prisms of semantic transformation: concepts, features, and components. For example, in the smart grid domain, the underlying business objectives might be defined as resilience, isolation, and sustainability; the corresponding features would then include energy generation, consumption, and storage. The following diagram shows these components.

Microgrid semantic component
This concept-driven approach to information extraction is similar to ontology-based prompting. It allows engineering teams to customize and extend the initial list of concepts to different domains of interest. Decomposing complex concepts into specific features enables the LLM to discover, interpret, and extract the relevant data elements.
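As a rough sketch of this idea (the concept and feature names below are illustrative only; Schneider’s actual ontology and prompts are proprietary), the decomposition can be captured in a simple mapping and folded into the extraction prompt:

```python
# Illustrative concept-to-feature decomposition for the microgrid domain.
# The entries are examples only, not Schneider's proprietary ontology.
MICROGRID_ONTOLOGY = {
    "resilience": ["backup energy generation", "energy storage"],
    "isolation": ["ability to disconnect from the main grid", "microgrid controls"],
    "sustainability": ["renewable energy generation", "energy consumption management"],
}

def build_citation_prompt(chunk: str) -> str:
    """Compose a citation-extraction prompt from an RFP chunk and the ontology."""
    concept_lines = "\n".join(
        f"- {concept}: {', '.join(features)}"
        for concept, features in MICROGRID_ONTOLOGY.items()
    )
    return (
        "Read the RFP excerpt below and quote any sentences that provide evidence "
        "for the following concepts and features:\n"
        f"{concept_lines}\n\nExcerpt:\n{chunk}"
    )
```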
The LLM was asked to read the RFP and extract citations related to the defined concepts and features. These citations substantiate the presence of electrical equipment that meets the high-level objectives and serve downstream as key evidence of the RFP’s relevance to the originating sales team.
For example, in the following code, the term BESS stands for battery energy storage system and provides evidence of electricity storage.
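The actual extracted citations are proprietary, but a hypothetical record might look like the following (the quoted sentence and field names are invented for illustration):

```python
# Hypothetical citation record; the quoted sentence and structure are illustrative.
bess_citation = {
    "concept": "sustainability",
    "feature": "energy storage",
    "quote": "The contractor shall furnish and install a 500 kWh battery energy "
             "storage system (BESS) adjacent to the main switchgear.",
}
```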
In the following example, the term EPC indicates the presence of a solar power plant.
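Again, a hypothetical record for illustration only:

```python
# Hypothetical citation record; EPC (engineering, procurement, and construction)
# of a photovoltaic plant signals on-site solar generation.
epc_citation = {
    "concept": "sustainability",
    "feature": "energy generation",
    "quote": "The selected bidder will provide EPC services for a 1.2 MW "
             "ground-mounted solar photovoltaic plant.",
}
```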
The entire solution includes three phases:
- Document chunking and preprocessing
- LLM-based citation extraction
- LLM-based summarization and evaluation of the citations
The first step uses standard document chunking together with Schneider’s proprietary document processing pipeline to group similar text elements into a single chunk. Each chunk is processed by the citation-extraction LLM, which identifies relevant citations within the chunk, if any are present. This brings relevant information to the forefront and filters out irrelevant content. Finally, the relevant citations are compiled and passed to the final LLM, which summarizes the RFP and determines its overall relevance to the microgrid product family. The following diagram shows this pipeline.
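A simplified sketch of this three-phase flow is shown below; the chunking logic, prompts, and model ID are assumptions, and Schneider’s proprietary preprocessing pipeline is omitted:

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime")
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"  # assumed model ID

def invoke_claude(prompt: str) -> str:
    """Send a single-turn prompt to Claude on Amazon Bedrock and return the reply text."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }
    response = bedrock.invoke_model(modelId=MODEL_ID, body=json.dumps(body))
    return json.loads(response["body"].read())["content"][0]["text"]

def review_rfp(chunks: list[str]) -> str:
    """Phase 2: extract citations chunk by chunk; phase 3: summarize and classify."""
    citations = []
    for chunk in chunks:
        reply = invoke_claude(
            "Quote any sentences in this RFP excerpt that provide evidence of energy "
            f"generation, storage, or consumption. Reply NONE if there are none.\n\n{chunk}"
        )
        if reply.strip() != "NONE":
            citations.append(reply)
    return invoke_claude(
        "Using only the citations below, summarize the RFP and classify its relevance "
        "to the microgrid business as yes, no, or maybe, with a 1-10 relevance score "
        "and a short explanation.\n\n" + "\n\n".join(citations)
    )
```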
Final decisions about an RFP are made using the following prompt structure. The details of the actual prompt are proprietary, but the structure includes the elements below; an illustrative sketch follows the list.
- First, provide the LLM with a brief description of the business unit in question.
- Next, define the persona and tell the LLM where to find the evidence.
- Provide the classification criteria for the RFP.
- Specify the output format, including:
  - A single decision: yes, no, or maybe.
  - A relevance score from 1 to 10.
  - An explanation of the decision.
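An illustrative template along these lines might look as follows; the wording is invented, because the production prompt is proprietary:

```python
# Illustrative prompt template only; placeholders stand in for proprietary content.
CLASSIFICATION_PROMPT = """\
You are an experienced proposal analyst for {business_unit_description}.

You will be given citations extracted from a request for proposal (RFP).
Use only these citations as evidence.

Classification criteria:
{classification_criteria}

Citations:
{citations}

Respond in this format:
Decision: yes | no | maybe
Relevance score: <1-10>
Explanation: <short justification referencing the evidence>
"""
```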
As a result, a relatively large corpus of RFP documents is compressed into a focused, concise, and informative representation that accurately captures and returns the most important aspects. This structure allows SMEs to quickly filter on specific LLM labels, and the summarized citations make it clear which quotes drive the LLM’s decision. This way, Schneider’s SME teams spend less time reading through pages of RFP proposals and can instead focus their attention on the content that matters most to the business. The sample below shows both the classification results and the qualitative feedback for a sample RFP.
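For illustration only (the real assistant’s output is not reproduced here), a result for a fictitious RFP might take a shape like this:

```python
# Hypothetical result for a fictitious RFP; values and wording are invented.
sample_result = {
    "decision": "yes",
    "relevance_score": 8,
    "explanation": "The RFP calls for a 500 kWh BESS and a 1.2 MW solar PV plant "
                   "with the ability to island from the main grid, which aligns "
                   "with the microgrid offer.",
}
```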
Internal teams are already experiencing the benefits of the new AI-driven RFP assistant.
“At Schneider Electric, we are committed to solving real-world problems by creating a new, sustainable and digital electric future. We are leveraging AI and LLMs to drive the digital transformation of our company and to further strengthen and accelerate efficiency and sustainability in the energy sector.”
– Anthony Medeiros, Manager, Solutions Engineering and Architecture, Schneider Electric.
Conclusion
In this post, we showed how the AWS GenAIIC team collaborated with Schneider Electric to apply the general-purpose capabilities of the LLMs available in Amazon Bedrock to assist sales teams and optimize their workloads.
The RFP Assistant solution enabled Schneider Electric to achieve 94% accuracy in the task of identifying microgrid opportunities. By making slight adjustments to the prompts, the solution can be extended and adopted in other business areas.
Precisely guided prompts help teams derive a clear, objective perspective from the same set of documents. The proposed solution allows RFPs to be viewed through the interchangeable lens of different business units pursuing different objectives. These previously hidden insights have the potential to uncover new business prospects and create supplemental revenue streams.
These capabilities will enable Schneider Electric to seamlessly integrate AI-powered insights and recommendations into daily operations. This integration facilitates an informed, data-driven decision-making process, streamlines operational workflows to increase efficiency, improves the quality of customer interactions, and ultimately delivers a superior experience.
About the authors
Anthony Medeiros is the Manager of Solutions Engineering and Architecture at Schneider Electric. He specializes in delivering high-value AI/ML initiatives to many business functions within North America. With 17 years of experience at Schneider Electric, he brings a wealth of industry knowledge and technical expertise to the team.
Adrian Boeh is a Senior Data Scientist working on advanced data tasks in Schneider Electric’s North American Customer Transformation organization. Adrian has 13 years of experience at Schneider Electric, is AWS Machine Learning certified, and has a proven ability to innovate and improve organizations using data science methodologies and technologies.
Costa Belts is a Senior Applied Scientist at the AWS Generative AI Innovation Center, where he helps customers design and build generative AI solutions to solve key business problems.
Dan Volk is a data scientist in the AWS Generative AI Innovation Center. He has 10 years of experience in machine learning, deep learning, and time series analysis, and holds a master’s degree in data science from the University of California, Berkeley. He is passionate about leveraging cutting-edge AI technology to transform complex business challenges into opportunities.
Negin Sokandan is a Senior Applied Scientist in the AWS Generative AI Innovation Center, where she works on building generative AI solutions for AWS strategic customers. Her research background is in statistical inference, computer vision, and multimodal systems.