NB: The demo optionally uses YOLO models, which can take 10-15 minutes to compile into TensorRT engines, so the first launch may take a while.
The pipeline detects when people cross a user-configured line and the direction of each crossing. Crossing events are attached to individual tracks, counted separately for each source, and the counters are displayed on the frame. The events are also stored in Graphite and displayed on a Grafana dashboard.
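The core of line-crossing detection can be sketched with a cross-product side test: a track crosses the line when its anchor point switches sides between consecutive frames, and the order of the sides gives the direction. This is an illustrative sketch, not the demo's actual implementation; the function and label names ("entry"/"exit") are assumptions.

```python
# Illustrative sketch of line-crossing direction detection.
# A point's side of the line (a, b) is the sign of the 2D cross product
# (b - a) x (p - a); a crossing is a sign change between two frames.

def side_of_line(a, b, p):
    """> 0: p is left of a->b, < 0: right, 0: on the line."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def crossing_direction(line, prev_pos, curr_pos):
    """Return 'entry', 'exit', or None for a track moving prev -> curr."""
    a, b = line
    before = side_of_line(a, b, prev_pos)
    after = side_of_line(a, b, curr_pos)
    if before < 0 < after:
        return "entry"
    if after < 0 < before:
        return "exit"
    return None  # same side, or touching the line: no crossing counted
```

In practice the per-track previous position is kept in tracker state, so each crossing is counted exactly once per track.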
Pedestrians preview:
Vehicles preview:
Article on Medium: Link
Tested on platforms:
- Nvidia Turing, Ampere
- Nvidia Jetson Orin family
Demonstrated operational modes:
- real-time processing: RTSP streams (multiple sources at once);
Demonstrated adapters:
- Video loop adapter;
- Always-ON RTSP sink adapter;
git clone https://github.com/insight-platform/Savant.git
cd Savant
git lfs pull
./utils/check-environment-compatible

Note: the Ubuntu 22.04 runtime configuration guide helps to configure the runtime to run Savant pipelines.
The demo uses models that are compiled into TensorRT engines the first time the demo is run. This takes time. Optionally, you can prepare the engines before running the demo by using the command:
# you are expected to be in Savant/ directory
./samples/traffic_meter/build_engines.sh

# you are expected to be in Savant/ directory
# if x86
docker compose -f samples/traffic_meter/docker-compose.x86.yml up
# if Jetson
docker compose -f samples/traffic_meter/docker-compose.l4t.yml up
# open 'rtsp://127.0.0.1:554/stream/town-centre-processed' in your player
# or visit 'http://127.0.0.1:888/stream/town-centre-processed/' (LL-HLS)
# for pre-configured Grafana dashboard visit
# http://127.0.0.1:3000/d/WM6WimE4z/entries-exits?orgId=1&refresh=5s
# Ctrl+C to stop running the compose bundle

To create a custom Grafana dashboard, sign in with admin/admin credentials.
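Graphite accepts metrics over its plaintext protocol: one `path value timestamp` line per metric, sent over TCP (port 2003 by default). The sketch below shows how a crossing counter could be pushed that way; the host, metric path, and helper names are assumptions, and the demo wires up its Graphite reporting internally.

```python
# Sketch: pushing a counter to Graphite via the plaintext protocol.
# Format is "metric.path value unix_timestamp\n", one metric per line.
import socket
import time

def format_metric_line(path, value, timestamp):
    """Build a single Graphite plaintext-protocol line."""
    return f"{path} {value} {timestamp}\n"

def send_metric(path, value, host="graphite", port=2003):
    """Open a TCP connection and send one metric line (hypothetical helper)."""
    line = format_metric_line(path, value, int(time.time()))
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(line.encode("ascii"))
```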
The sample includes an option to choose the model used for object detection. Choose between NVIDIA peoplenet, YOLOv8, YOLOv11 and YOLOv4 by changing the DETECTOR environment variable in the .env file:
- DETECTOR=peoplenet for peoplenet
- DETECTOR=yolov8m for yolov8m
- DETECTOR=yolov8s for yolov8s
- DETECTOR=yolov4 for yolov4
- DETECTOR=yolov11s for yolov11s
- DETECTOR=yolov11n for yolov11n
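The detector selection above amounts to reading one environment variable against a fixed set of allowed values, with yolov8s as the documented default. A minimal sketch of that validation, with an illustrative helper name, could look like:

```python
# Sketch: resolving the DETECTOR setting from the environment.
# The allowed values and the yolov8s default mirror the sample's docs.
import os

ALLOWED_DETECTORS = {"peoplenet", "yolov8m", "yolov8s", "yolov4", "yolov11s", "yolov11n"}

def resolve_detector(env=os.environ):
    """Return the configured detector name, defaulting to yolov8s."""
    detector = env.get("DETECTOR", "yolov8s")
    if detector not in ALLOWED_DETECTORS:
        raise ValueError(f"Unsupported DETECTOR value: {detector!r}")
    return detector
```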
Download the video file to the data folder. For example:
# you are expected to be in Savant/ directory
mkdir -p data && curl -o data/AVG-TownCentre.mp4 \
https://eu-central-1.linodeobjects.com/savant-data/demo/AVG-TownCentre.mp4

Now you are ready to run the performance benchmark with the following command:
./samples/traffic_meter/run_perf.sh

Note: Change the value of the DATA_LOCATION variable in the run_perf.sh script if you changed the video.
Note: yolov8s detector is set by default.

