This post was co-authored with Bria’s Bar Fingerman.
We are pleased to announce that Bria AI’s Bria 2.3, Bria 2.2 HD, and Bria 2.3 Fast text-to-image foundation models (FMs) are now available in Amazon SageMaker JumpStart. Bria models are trained exclusively on commercial-grade licensed data, providing high standards of safety and compliance with full legal coverage.
These advanced models from Bria AI are high-quality, contextual, and ready to use for marketing, design, and image generation use cases across a variety of industries, from e-commerce, media and entertainment, and gaming to consumer packaged goods and retail, helping you generate relevant visual content.
This post introduces Bria’s model family, provides an overview of Amazon SageMaker JumpStart, and explains how to use SageMaker JumpStart to discover, deploy, and run inference on Bria 2.3 models.
Overview of Bria 2.3, Bria 2.2 HD, and Bria 2.3 Fast
Bria AI offers a family of high-quality visual content models. These advanced models represent the cutting edge of generative AI technology for image creation.
- Bria 2.3 – The core model delivers high-quality visual content with strong detail and can produce striking images of complex concepts in a variety of art styles, including photorealism.
- Bria 2.2 HD – Optimized for resolution, Bria 2.2 HD delivers high-definition visual content with crisp, clear details to meet the demands of high-definition applications.
- Bria 2.3 Fast – Optimized for speed, Bria 2.3 Fast generates high-quality visuals with lower latency and is ideal for applications that require quick processing times without sacrificing quality. Running the model on a SageMaker g5 instance type provides lower latency and higher throughput than Bria 2.3 and Bria 2.2 HD, and a p4d instance type offers roughly twice the speed of a g5 instance.
SageMaker JumpStart overview
SageMaker JumpStart gives you access to a wide selection of publicly available FMs. ML practitioners can deploy FMs to dedicated SageMaker instances from a network-isolated environment and customize models using SageMaker for training and deployment. You can now discover and deploy Bria models in Amazon SageMaker Studio or programmatically through the SageMaker Python SDK, which lets you manage model performance and machine learning operations (MLOps) with SageMaker features such as Amazon SageMaker Pipelines, Amazon SageMaker Debugger, and container logs.
Models are deployed in a secure AWS environment and under your virtual private cloud (VPC) controls, which helps provide data security. Bria models are currently available for deployment and inference in SageMaker Studio in 22 AWS Regions where SageMaker JumpStart is available. Bria models require g5 or p4 instances.
Prerequisites
To try out Bria models using SageMaker JumpStart, you need the following prerequisites:
- An AWS account with access to SageMaker Studio.
- An AWS Identity and Access Management (IAM) role with permissions to create a SageMaker model endpoint, such as the SageMaker full access policy referenced later in this post.
- Account-level service quotas for at least one of the recommended GPU instance types (for example, ml.g5.2xlarge).
Discover the Bria model with SageMaker JumpStart
You can access FMs through SageMaker JumpStart in the SageMaker Studio UI and through the SageMaker Python SDK. This section shows you how to discover the models in SageMaker Studio.
SageMaker Studio is an IDE that provides a single web-based visual interface with access to dedicated tools to perform all ML development steps, from preparing data to building, training, and deploying ML models. For more information about how to get started and set up SageMaker Studio, see Amazon SageMaker Studio.
In SageMaker Studio, you can access SageMaker JumpStart by choosing JumpStart in the navigation pane or by choosing JumpStart on the Home page.
On the SageMaker JumpStart landing page, you can find pre-trained models from popular model hubs. Search for Bria, and the search results list the available Bria model variants. This post uses the Bria 2.3 Commercial Text-to-image model.
Select a model card to view details about the model, including its license, the data used for training, and how to use the model. You also have two options, Deploy and Preview notebooks, for deploying the model and creating the endpoint.
Subscribe to the Bria model on AWS Marketplace
When you choose Deploy, if the model isn’t already subscribed, you must first subscribe before you can deploy it. This section demonstrates the subscription process for the Bria 2.3 Commercial Text-to-image model; you can repeat the same steps to subscribe to other Bria models.
After you choose Subscribe, you’re redirected to the model overview page, where you can review the model details, pricing, usage, and other information. Choose Continue to Subscribe, then accept the offer on the next page to complete the subscription.
Configure and deploy the Bria model using AWS Marketplace
The configuration page lets you choose from three different launch methods. This post shows how to use the SageMaker console.
- For Available launch method, select SageMaker console.
- For Region, select your desired AWS Region.
- Choose View in Amazon SageMaker.
- For Model name, enter a name (for example, Model-Bria-v2-3).
- For IAM role, select an existing IAM role or create a new role with the SageMaker full access IAM policy attached.
- Choose Next.
The recommended instance types for this model endpoint are ml.g5.2xlarge, ml.g5.12xlarge, ml.g5.48xlarge, ml.p4d.24xlarge, and ml.p4de.24xlarge. To deploy this model, make sure you have account-level service quotas for at least one of these instance types. For more information, see Requesting a quota increase.
- In the Variants section, choose one of the recommended instance types provided by Bria, such as ml.g5.2xlarge.
- Choose Create endpoint configuration.
A success message appears when the endpoint configuration is created successfully.
- Choose Next to create an endpoint.
- In the Create endpoint section, enter an endpoint name (for example, Endpoint-Bria-v2-3-Model) and choose Submit.
After the endpoint is created successfully, it appears on the Endpoints page on the SageMaker console.
Configure and deploy Bria models using SageMaker JumpStart
If the Bria model is already subscribed on AWS Marketplace, you can choose Deploy on the model card page to configure the endpoint.
On the endpoint configuration page, SageMaker pre-populates the endpoint name, recommended instance type, instance count, and other details. You can modify these based on your requirements, then choose Deploy to create the endpoint.
When the endpoint is created successfully, its status shows as In service.
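If you prefer to deploy programmatically, the SageMaker Python SDK provides a JumpStartModel class. The following is a minimal sketch, not the post’s original code: the model ID is a placeholder that you would replace with the Bria model ID listed in SageMaker JumpStart, and the endpoint name matches the one used earlier in this post.

```python
from sagemaker.jumpstart.model import JumpStartModel

# Placeholder model ID -- replace with the Bria model ID shown in SageMaker JumpStart.
model = JumpStartModel(model_id="<bria-model-id>")

# Deploy to one of the recommended instance types.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
    endpoint_name="Endpoint-Bria-v2-3-Model",
)
```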
Run inference in SageMaker Studio
You can test the endpoint by passing a sample inference request payload in SageMaker Studio, or you can use a SageMaker notebook. This section shows you how to use SageMaker Studio.
- In the SageMaker Studio navigation pane, choose Endpoints under Deployments.
- Select the Bria endpoint you just created.
- On the Test inference tab, test the endpoint by sending a sample request.
You can see the response on the same page as shown in the following screenshot.
Text-to-image generation using SageMaker notebooks
You can also use SageMaker notebooks to perform inference against endpoints deployed using the SageMaker Python SDK.
The following code connects to the endpoint created using SageMaker JumpStart.
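The original notebook code isn’t reproduced here, so the following is a minimal sketch that attaches a Predictor to the existing endpoint. It assumes the endpoint name Endpoint-Bria-v2-3-Model used earlier and a JSON request/response format.

```python
import sagemaker
from sagemaker.predictor import Predictor
from sagemaker.serializers import JSONSerializer
from sagemaker.deserializers import JSONDeserializer

# Name of the endpoint created earlier in this post.
endpoint_name = "Endpoint-Bria-v2-3-Model"

# Attach a Predictor to the existing endpoint; JSON in, JSON out is assumed.
predictor = Predictor(
    endpoint_name=endpoint_name,
    sagemaker_session=sagemaker.Session(),
    serializer=JSONSerializer(),
    deserializer=JSONDeserializer(),
)
```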
The model response is in base64 encoded format. The following functions help decode base64 encoded images and display them as images.
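A minimal sketch of such helpers, assuming the response carries images as base64-encoded strings and the notebook runs in an IPython environment with Pillow installed:

```python
import base64
from io import BytesIO

from IPython.display import display
from PIL import Image


def decode_base64_image(image_b64: str) -> Image.Image:
    """Decode a base64-encoded image string into a PIL image."""
    return Image.open(BytesIO(base64.b64decode(image_b64)))


def show_image(image_b64: str) -> None:
    """Decode a base64-encoded image and display it in the notebook."""
    display(decode_base64_image(image_b64))
```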
Below is a sample payload with a text prompt to generate an image using the Bria model.
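The exact request schema isn’t shown in this post, so the payload below is only illustrative; the field names (prompt, num_results) are assumptions to verify against the model’s preview notebook.

```python
# Illustrative payload -- the field names are assumptions; confirm them
# against the Bria model's preview notebook before use.
payload = {
    "prompt": "Photo, dynamic, city, professional male skateboarder, sunglasses, teal and orange shades",
    "num_results": 1,
}
```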
Example prompts
Bria 2.3 text-to-image models operate like standard image generation models, with the model processing an input sequence and outputting a response. This section provides some example prompts and sample output.
Use the following prompts:
- Photo, dynamic, city, professional male skateboarder, sunglasses, teal and orange shades
- A young woman with flowing curly hair stands on a subway platform, illuminated by the purple and cyan colors and the bright lights of a speeding train.
- Close-up of a bright blue and green parrot perched on a tree branch inside a cozy, bright room
- Light speed motion with blue and purple neon colors and buildings in the background
The model produces the following images.
The following is example code that generates an image using the text prompts described above.
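A minimal end-to-end sketch that reuses the predictor, payload, and helper functions defined above; the response layout (a list of base64-encoded artifacts) is an assumption, so adjust the keys to match the actual model output.

```python
# Invoke the endpoint with the sample payload.
response = predictor.predict(payload)

# Assumed response layout: a list of base64-encoded artifacts.
image_b64 = response["artifacts"][0]["base64"]
show_image(image_b64)
```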
Clean up
After the notebook finishes running, delete all the resources you created in the process to stop incurring charges. Use the following code:
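A minimal sketch using the predictor created earlier:

```python
# Delete the model and endpoint created in this walkthrough to stop incurring charges.
predictor.delete_model()
predictor.delete_endpoint()
```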
Conclusion
With the availability of Bria 2.3, 2.2 HD, and 2.3 Fast on SageMaker JumpStart and AWS Marketplace, businesses can now enhance their visual content creation process with advanced generative AI capabilities. These models offer a balance of quality, speed, and compliance, making them valuable assets for organizations looking to maintain an edge in the competitive environment.
Bria’s commitment to responsible AI and SageMaker’s robust security framework provide enterprises with a complete package of responsible AI models for data privacy, regulatory compliance, and commercial use. Additionally, the unified experience leverages the capabilities of both platforms to simplify MLOps, data storage, and real-time processing.
For more information about using FMs with SageMaker JumpStart, see Training, Deploying, and Evaluating Pretrained Models with SageMaker JumpStart, JumpStart Foundation Models, and Getting Started with Amazon SageMaker JumpStart.
Explore the Bria model with SageMaker JumpStart today and revolutionize your visual content creation process!
About the authors
Bar Fingerman is the Head of AI/ML Engineering at Bria. He leads the development and optimization of core infrastructure, enabling the company to scale its cutting-edge generative AI technology. Bar heads the engineering group in deploying, managing, and securing scalable AI/ML cloud solutions, with a focus on designing high-performance supercomputers for large-scale AI training. He works closely with leadership and cross-functional teams to align business objectives while driving innovation and cost efficiency.
Supriya Pragendra is a Senior Solutions Architect at AWS. She has over 15 years of IT experience in software development, design, and architecture. She supports data, generative AI, and AI/ML initiatives for key customer accounts, and is passionate about data-driven AI, deep learning, and generative AI.
Rodrigo Merino is a Generative AI Solutions Architect Manager at AWS. With over a decade of experience deploying emerging technologies, from generative AI to IoT, Rodrigo guides customers across a variety of industries to accelerate their AI/ML and generative AI journeys. He specializes in helping organizations train and build models, as well as operationalize end-to-end ML solutions on AWS. Rodrigo’s expertise lies in bridging the gap between cutting-edge technology and real-world business applications, enabling companies to unlock the full potential of AI and drive innovation in their respective fields.
Eliad Maimon is a Senior Startup Solutions Architect at AWS, focusing on generative AI startups. He helps startups accelerate and scale their AI/ML journeys by guiding them through deep learning model training and deployment on AWS. Passionate about AI and entrepreneurship, Eliad is committed to driving innovation and growth in the startup ecosystem.