Today, we are excited to announce that John Snow Labs’ Medical LLM – Small and Medical LLM – Medium large language models (LLMs) are now available on Amazon SageMaker JumpStart. Medical LLM is optimized for the following medical language understanding tasks:
- Summarizing clinical encounters – Summarizing discharge summaries, progress notes, radiology reports, pathology reports, and various other medical reports
- Question answering on clinical notes or biomedical research – Answering questions about a clinical encounter’s main diagnosis and the tests that were ordered, or about the study design and main findings described in a research abstract
For physicians, these capabilities help them quickly understand a patient’s medical journey and make timely, informed decisions from extensive documentation. The summarization capability not only increases efficiency, but also helps make sure that important details are not overlooked, thereby supporting optimal patient care and improving healthcare outcomes.
In a blind evaluation conducted by the John Snow Labs research team, Medical LLM – Small outperformed GPT-4o at summarizing medical text, being preferred by doctors 88% more often for factuality, 92% more often for clinical relevance, and 68% more often for brevity. The model also performed well at question answering on clinical notes, being preferred 46% more for factuality, 50% more for relevance, and 44% more for brevity. For question answering on biomedical research, the model was favored even more strongly, being preferred 175% more for factuality, 300% more for relevance, and 356% more for brevity. Notably, the small model performed comparably on an open-ended medical question-answering task despite being more than an order of magnitude smaller than the models it was compared against.
Medical LLM in SageMaker JumpStart is available in two sizes: Medical LLM – Small and Medical LLM – Medium. The models are deployable on commodity hardware while providing state-of-the-art accuracy, which is important for healthcare providers who need to process millions to billions of patient records without straining their computing budgets.
Both models support a context window of 32,000 tokens, which is roughly 50 pages of text. You can try the models through SageMaker JumpStart, a machine learning (ML) hub that provides access to algorithms, models, and ML solutions so you can quickly get started with ML. This post walks through how to discover and deploy Medical LLM – Small using SageMaker JumpStart.
About John Snow Labs
John Snow Labs, a healthcare AI company, provides cutting-edge software, models, and data to help healthcare and life sciences organizations put AI to work. John Snow Labs is the developer of Spark NLP, Healthcare NLP, and Medical LLM. The company’s award-winning medical AI software powers some of the world’s largest pharmaceutical companies, academic medical centers, and health technology companies. John Snow Labs’ Medical Language Models library is the most widely used natural language processing (NLP) library among healthcare professionals (Gradient Flow, The NLP Industry Survey 2022 and The Generative AI in Healthcare Survey 2024).
John Snow Labs’ cutting-edge AI models for clinical and biomedical language understanding include:
- Medical language models, consisting of over 2,400 pre-trained models for analyzing clinical and biomedical text
- Visual language model focused on understanding visual documents and forms
- Peer-reviewed, state-of-the-art accuracy on a variety of common medical language comprehension tasks
- Tested for robustness, fairness, and bias
What is SageMaker JumpStart?
With SageMaker JumpStart, you can choose from a broad selection of publicly available foundation models (FMs). ML practitioners can deploy FMs to dedicated Amazon SageMaker instances within a network-isolated environment and customize models using SageMaker for model training and deployment. You can now discover and deploy the Medical LLM – Small model with a few clicks in Amazon SageMaker Studio or programmatically through the SageMaker Python SDK, enabling you to derive model performance and machine learning operations (MLOps) controls with SageMaker features such as Amazon SageMaker Pipelines, Amazon SageMaker Debugger, or container logs. The model is deployed in an AWS secure environment and under your virtual private cloud (VPC) controls, helping to provide data security. The Medical LLM – Small model is available today for deployment and inference in SageMaker Studio.
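For the programmatic path, the following is a minimal sketch of deploying the model with the SageMaker Python SDK’s JumpStartModel class. The model ID shown here is a placeholder assumption; use the exact identifier listed on the model card in SageMaker JumpStart, and note that the AWS Marketplace subscription described later in this post must already be in place.

```python
from sagemaker.jumpstart.model import JumpStartModel

# Hypothetical model ID -- look up the exact ID on the model card in SageMaker JumpStart.
# An active AWS Marketplace subscription for the model is required before deploying.
model_id = "john-snow-labs-medical-llm-small"

model = JumpStartModel(model_id=model_id)

# Creates a real-time endpoint; this typically takes several minutes
predictor = model.deploy()
```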
Discover the Medical LLM – Small model in SageMaker JumpStart
You can access the FMs through SageMaker JumpStart in the SageMaker Studio UI and the SageMaker Python SDK. This section describes how to discover the model in SageMaker Studio.
SageMaker Studio is an integrated development environment (IDE) that provides a single web-based visual interface where you can access purpose-built tools to perform all ML development steps, from preparing data to building, training, and deploying your ML models. For more information about how to get started and set up SageMaker Studio, see Amazon SageMaker Studio.
SageMaker Studio provides access to SageMaker JumpStart, which includes pre-trained models, notebooks, and prebuilt solutions.
From the SageMaker JumpStart landing page, you can browse different hubs named after model providers to find different models. The Medical LLM – Small model can be found in the John Snow Labs hub (see screenshot below). If you don’t see the Medical LLM – Small model, update your version of SageMaker Studio by shutting down and restarting. For more information, see Shut down and update Studio Classic apps.
You can also find the Medical LLM – Small model by searching for “John Snow Labs” in the search field.
Choose the model card to view details about the model, including its license, the data used for training, and how to use the model. You will also find two options for deploying the model, Deploy and Preview notebooks. Choosing Deploy deploys the model and creates an endpoint.
Subscribe to the Medical LLM – Small model in AWS Marketplace
This model requires an AWS Marketplace subscription. When you choose Deploy, SageMaker Studio prompts you to subscribe to the AWS Marketplace listing if you haven’t already done so. If you are already subscribed, choose Deploy.
If you don’t have an active AWS Marketplace subscription, choose Subscribe. You will be redirected to the listing on AWS Marketplace. Review the terms and conditions and choose Accept offer.
After you successfully subscribe to a model on AWS Marketplace, you can deploy the model to SageMaker JumpStart.
Deploy the Medical LLM – Small model with SageMaker JumpStart
When you choose Deploy, SageMaker Studio begins the deployment.
You can monitor the progress of the deployment on the endpoint details page that you are redirected to.
On the same endpoint details page, the Test inference tab lets you send test inference requests to the deployed model. This is useful for verifying that the endpoint responds to requests as expected. The following prompt asks the Medical LLM – Small model a question with supporting context so you can review the resulting response, which also includes performance metrics such as execution time.
You can also test summary responses for medical texts.
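To exercise the endpoint outside the Studio UI, the following is a minimal sketch using boto3. The endpoint name and the request/response JSON schema are assumptions for illustration; consult the model card or the sample notebook for the exact payload format the model expects.

```python
import json
import boto3

# Hypothetical endpoint name and payload schema -- check the model card or
# sample notebook for the request format this model actually expects
endpoint_name = "medical-llm-small-endpoint"

payload = {
    "inputs": (
        "Context: The patient is a 62-year-old male admitted with chest pain...\n"
        "Question: What was the main diagnosis for this encounter?"
    ),
    "parameters": {"max_new_tokens": 256, "temperature": 0.1},
}

runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(json.loads(response["Body"].read()))
```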
Deploy the model and run inference through a notebook
Alternatively, you can choose Open notebook to deploy the model through the sample notebook in JupyterLab. The sample notebook provides end-to-end guidance on how to deploy the model for inference and clean up resources. You can set additional parameters as needed, but SageMaker JumpStart lets you quickly deploy and run inference using the included code.
The notebook already contains the code needed to deploy the model to SageMaker with default configurations, including the default instance type and default VPC configurations. You can change these configurations by specifying non-default values in JumpStartModel. For more information, see the API documentation.
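As a rough illustration of overriding those defaults, the snippet below passes a non-default instance type and endpoint name when constructing and deploying a JumpStartModel. The model ID and instance type here are assumptions; use the values recommended in the notebook and on the model card.

```python
from sagemaker.jumpstart.model import JumpStartModel

# Hypothetical model ID and instance type -- use the values from the model card
model = JumpStartModel(
    model_id="john-snow-labs-medical-llm-small",
    instance_type="ml.g5.2xlarge",
)

predictor = model.deploy(
    initial_instance_count=1,
    endpoint_name="medical-llm-small-endpoint",
)
```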
After you deploy your model, you can run real-time or batch inference on the deployed endpoints. This notebook includes sample code and instructions for both.
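Continuing the sketch above, a real-time request through the SDK predictor might look like the following; again, the payload schema is an assumption, and the notebook’s sample code is the authoritative reference.

```python
# Real-time inference through the predictor from the previous snippet;
# the payload schema here is an assumption -- follow the notebook's sample code
payload = {
    "inputs": "Summarize the following discharge summary: ...",
    "parameters": {"max_new_tokens": 512},
}

response = predictor.predict(payload)
print(response)
```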
Clean up
After the notebook finishes running, delete all resources created in the process.
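If you deployed the model programmatically, a minimal cleanup sketch using the predictor from the earlier snippets looks like this:

```python
# Remove the model and endpoint created by the earlier deploy() call
predictor.delete_model()
predictor.delete_endpoint()
```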
If you deployed the endpoint from the SageMaker Studio console, you can delete it by choosing Delete on the endpoint details page.
If you want to completely unsubscribe from the model package, you must unsubscribe from the product in AWS Marketplace:
- Navigate to the Machine Learning tab on the Your Software subscriptions page.
- Locate the listing that you want to cancel, then choose Cancel Subscription.
Complete these cleanup steps to avoid ongoing charges for the model.
Conclusion
In this post, we showed you how to get started with the first healthcare-specific model now available in SageMaker JumpStart. Check out SageMaker JumpStart in SageMaker Studio to get started today.
About the authors
Art Tuazon is a Solutions Architect on the CSC team at AWS. She supports both AWS partners and customers with technical best practices. In her free time, she enjoys running and cooking.
Bo Tse is a Partner Solutions Architect at AWS. He is passionate about enabling partners on AWS, with a focus on supporting AWS partners through their partner journey. In his free time, he enjoys traveling and dancing.
David Talby is the Chief Technology Officer at John Snow Labs, helping companies apply artificial intelligence to solve real-world problems in healthcare and life sciences. He was named USA CTO of the Year in 2022 by the Global 100 Awards and Game Changers Awards.