Amazon Rekognition people pathing is a machine learning (ML)-based capability of Amazon Rekognition Video that lets you understand where, when, and how each person moves in a video. This capability can be used for multiple use cases, including:
- Retail analytics – Identify in-store customer flow and high-traffic areas
- Sports analytics – Track player movements on the field or court
- Industrial safety – Monitor worker movement in work environments to promote compliance with safety protocols
After careful consideration, we made the decision to discontinue Rekognition people pathing on October 31, 2025. New customers have not been able to access the feature since October 24, 2024, but existing customers can continue to use it as usual until October 31, 2025.
This post describes an alternative solution to Rekognition people pathing and how you can implement it in your application.
Rekognition people pathing alternatives
One alternative to Amazon Rekognition people pathing combines the open source ML model YOLOv9, used for object detection, with the open source ByteTrack algorithm, used for multi-object tracking.
Overview of YOLOv9 and ByteTrack
YOLOv9 is the latest version in the YOLO object detection model series. It uses a specialized architecture called Generalized Efficient Layer Aggregation Network (GELAN) to analyze images efficiently. The model divides the image into a grid, quickly identifying and localizing objects in each section in a single pass. Its training uses a technique called programmable gradient information (PGI) to improve accuracy, especially for objects that are easy to miss. This combination of speed and accuracy makes YOLOv9 ideal for applications that require fast and reliable object detection.
ByteTrack is an algorithm for tracking multiple moving objects in a video, such as people walking through a store. What makes ByteTrack special is how it handles low-confidence detections: instead of discarding them, it uses them when associating tracks, so even when someone is partially hidden or in a crowd, ByteTrack can often keep tracking them. It is designed to be fast and accurate, and it works well even when many people are tracked at the same time.
When you combine YOLOv9 and ByteTrack for people pathing, you can follow each person's movement across the video. YOLOv9 detects people in each video frame, and ByteTrack correlates these detections across frames to create a consistent track for each individual, showing how people move within the video over time.
Code example
The following code example is a Python script that you can use as part of your AWS Lambda function or processing pipeline. You can also deploy YOLOv9 and ByteTrack for inference using Amazon SageMaker. SageMaker offers several options for model deployment, including real-time inference, asynchronous inference, serverless inference, and batch inference. You can choose the appropriate option based on your business requirements.
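As a hedged sketch of the SageMaker option, the following provisioning snippet deploys a packaged model behind an asynchronous inference endpoint using the SageMaker Python SDK. The S3 paths, the `inference.py` handler, and the instance type are placeholders you would replace for your environment; this is one possible setup, not the only supported one.

```python
# Sketch: deploy a YOLOv9 + ByteTrack inference handler to a SageMaker
# asynchronous endpoint. All S3 URIs and the entry_point script are
# placeholders for illustration.
import sagemaker
from sagemaker.pytorch import PyTorchModel
from sagemaker.async_inference import AsyncInferenceConfig

role = sagemaker.get_execution_role()

model = PyTorchModel(
    model_data="s3://my-bucket/yolov9-bytetrack/model.tar.gz",  # placeholder
    role=role,
    entry_point="inference.py",  # your handler wrapping detection + tracking
    framework_version="2.1",
    py_version="py310",
)

# Asynchronous inference suits long-running video jobs: requests are queued
# and results are written back to S3 when processing completes.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g4dn.xlarge",
    async_inference_config=AsyncInferenceConfig(
        output_path="s3://my-bucket/yolov9-bytetrack/output/"  # placeholder
    ),
)
```

Asynchronous inference is a natural fit here because video processing can run longer than the real-time endpoint timeout; serverless or batch inference may fit other workloads better.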
Here's a high-level breakdown of what the script does:
- Load the YOLOv9 model – This model is used to detect objects in each frame.
- Initialize the ByteTrack tracker – This tracker assigns a unique ID to each detected object and tracks it across frames.
- Iterate through the video frame by frame – For each frame, the script detects objects, updates their tracks, and draws bounding boxes and labels around them. The tracking results are also written to a JSON file.
- Output the processed video – The final video is saved with every frame annotated with the detected and tracked objects.
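The steps above can be sketched as follows. This is a minimal illustration, assuming the `ultralytics` package, which bundles pretrained YOLOv9 checkpoints and a built-in ByteTrack tracker (`pip install ultralytics opencv-python`); the file names, checkpoint name, and JSON schema are placeholders, not a fixed API.

```python
# Sketch of a person-pathing pipeline using YOLOv9 + ByteTrack via the
# ultralytics package. File names and the output schema are illustrative.
import json


def normalize_box(xyxy, frame_w, frame_h):
    """Convert an absolute [x1, y1, x2, y2] box to a Rekognition-style
    normalized bounding box (values as fractions of the frame size)."""
    x1, y1, x2, y2 = xyxy
    return {
        "height": (y2 - y1) / frame_h,
        "left": x1 / frame_w,
        "top": y1 / frame_h,
        "width": (x2 - x1) / frame_w,
    }


def track_people(video_in, video_out, json_out, weights="yolov9c.pt"):
    import cv2
    from ultralytics import YOLO

    model = YOLO(weights)  # load a pretrained YOLOv9 checkpoint
    cap = cv2.VideoCapture(video_in)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(video_out, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (w, h))

    records, frame_idx = [], 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Detect people (COCO class 0) and let ByteTrack assign
        # persistent IDs across frames
        result = model.track(frame, persist=True, classes=[0],
                             tracker="bytetrack.yaml", verbose=False)[0]
        if result.boxes.id is not None:
            for box, track_id in zip(result.boxes.xyxy.tolist(),
                                     result.boxes.id.int().tolist()):
                records.append({
                    "timestamp": round(frame_idx / fps, 3),
                    "person_index": track_id,
                    "bounding_box": normalize_box(box, w, h),
                })
                # Annotate the frame with the box and the track ID
                x1, y1, x2, y2 = (int(v) for v in box)
                cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
                cv2.putText(frame, f"person {track_id}", (x1, y1 - 5),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
        writer.write(frame)
        frame_idx += 1

    cap.release()
    writer.release()
    with open(json_out, "w") as f:
        json.dump(records, f, indent=2)
```

You would call it as `track_people("input.mp4", "output.mp4", "person_paths.json")`; `persist=True` keeps tracker state between frames so IDs stay stable across the video.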
Verification
We use the following video to demonstrate this integration. The video shows a football practice session, with the quarterback starting a play.
The following table shows an example of the content of a JSON file with timestamped person tracking output.
| Timestamp | Person index | Bounding box height | Bounding box left | Bounding box top | Bounding box width |
|---|---|---|---|---|---|
| 0 | 42 | 0.51017 | 0.67687 | 0.44032 | 0.17873 |
| 0 | 63 | 0.41175 | 0.05670 | 0.3148 | 0.07048 |
| 1 | 42 | 0.49158 | 0.69260 | 0.44224 | 0.16388 |
| 1 | 65 | 0.35100 | 0.06183 | 0.57447 | 0.06801 |
| 4 | 42 | 0.49799 | 0.70451 | 0.428963 | 0.13996 |
| 4 | 63 | 0.33107 | 0.05155 | 0.59550 | 0.09304 |
| 4 | 65 | 0.78138 | 0.49435 | 0.20948 | 0.24886 |
| 7 | 42 | 0.42591 | 0.65892 | 0.44306 | 0.0951 |
| 7 | 63 | 0.28395 | 0.06604 | 0.58020 | 0.13908 |
| 7 | 65 | 0.68804 | 0.43296 | 0.30451 | 0.18394 |
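Each table row corresponds to one record in the JSON output. For illustration, a single record might look like the following; the field names are hypothetical and depend on how you serialize the script's results:

```json
{
  "timestamp": 0,
  "person_index": 42,
  "bounding_box": {
    "height": 0.51017,
    "left": 0.67687,
    "top": 0.44032,
    "width": 0.17873
  }
}
```

As with Rekognition's output, the bounding box values are expressed as fractions of the frame dimensions, so they are independent of the video resolution.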
The video below shows the results of the person tracking output.
More open source solutions for people pathing
YOLOv9 and ByteTrack are a powerful combination for people pathing, but other open source alternatives are also worth considering:
- DeepSORT – A popular algorithm that combines deep learning capabilities with traditional tracking methods
- FairMOT – Integrates object detection and re-identification in a single network, enabling object tracking in crowded scenes
These solutions can be effectively deployed using Amazon SageMaker for inference.
Conclusion
This post provided an overview of how to test and implement YOLOv9 and ByteTrack as an alternative to Rekognition people pathing. You can implement these open source tools in your applications by combining them with AWS services such as AWS Lambda and Amazon SageMaker.
About the authors
Fangzhou Zheng is a Senior Applied Scientist at AWS. He builds scientific solutions for Amazon Rekognition and Amazon Monitron, delivering cutting-edge models to customers. His areas of focus include generative AI, computer vision, and time series data analysis.
Marcel Pividal is a Senior AI Services SA at the World Wide Specialist Organization, bringing over 22 years of expertise in transforming complex business challenges into innovative technology solutions. As a thought leader in generative AI implementation, he specializes in developing secure and compliant AI architectures for enterprise-scale deployments across multiple industries.