Traffic meter demo

NB: The demo optionally uses YOLO models, which take 10-15 minutes to compile into TensorRT engines, so the first launch may take a considerable time.

The pipeline detects when people cross a user-configured line and the direction of the crossing. The crossing events are attached to individual tracks, counted separately for each source, and the counters are displayed on the frame. The crossing events are also stored in Graphite and displayed on a Grafana dashboard.
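The core of such line-crossing logic can be sketched with plain geometry: a track's previous and current anchor points are tested against the configured line, and the sign of a cross product tells which side each point is on and hence the direction of the crossing. The function and label names below are illustrative, not taken from the sample's code:

```python
# Minimal sketch of crossing detection for a single track.
# A directed line (a -> b) splits the plane into two half-planes;
# the sign of the cross product tells which side a point is on.

def side(line, point):
    """Return -1, 0, or 1 for the side of `point` relative to the directed line."""
    (ax, ay), (bx, by) = line
    px, py = point
    cross = (bx - ax) * (py - ay) - (by - ay) * (px - ax)
    return (cross > 0) - (cross < 0)

def crossing_direction(line, prev_point, curr_point):
    """Return 'entry', 'exit', or None for a track moving prev -> curr."""
    before, after = side(line, prev_point), side(line, curr_point)
    if before == after or 0 in (before, after):
        return None  # no side change (or a point lies exactly on the line)
    return 'entry' if after > 0 else 'exit'
```

Which side counts as "entry" is a matter of convention; in the demo it follows the direction in which the line is configured.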

Pedestrians preview:

Vehicles preview:

Article on Medium: Link

Tested on platforms:

  • Nvidia Turing, Ampere
  • Nvidia Jetson Orin family

Demonstrated operational modes:

  • real-time processing: RTSP streams (multiple sources at once);

Demonstrated adapters:

  • Video loop adapter;
  • Always-ON RTSP sink adapter;

Prerequisites

git clone https://github.com/insight-platform/Savant.git
cd Savant
git lfs pull
./utils/check-environment-compatible

Note: the Ubuntu 22.04 runtime configuration guide explains how to configure the runtime to run Savant pipelines.

Build Engines

The demo uses models that are compiled into TensorRT engines the first time the demo is run. This takes time. Optionally, you can prepare the engines before running the demo by using the command:

# you are expected to be in Savant/ directory

./samples/traffic_meter/build_engines.sh

Run Demo

# you are expected to be in Savant/ directory

# if x86
docker compose -f samples/traffic_meter/docker-compose.x86.yml up

# if Jetson
docker compose -f samples/traffic_meter/docker-compose.l4t.yml up

# open 'rtsp://127.0.0.1:554/stream/town-centre-processed' in your player
# or visit 'http://127.0.0.1:888/stream/town-centre-processed/' (LL-HLS)

# for pre-configured Grafana dashboard visit
# http://127.0.0.1:3000/d/WM6WimE4z/entries-exits?orgId=1&refresh=5s

# Ctrl+C to stop running the compose bundle

To create a custom Grafana dashboard, sign in with admin/admin credentials.
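Under the hood, counters reach Graphite over its plaintext protocol: one `metric.path value timestamp` line per datapoint, sent to the carbon receiver (TCP port 2003 by default). A minimal sketch, with a hypothetical metric path rather than the demo's actual metric names:

```python
import socket
import time

def format_graphite_metric(path, value, timestamp=None):
    """Build one line of the Graphite plaintext protocol: 'path value timestamp\\n'."""
    timestamp = int(timestamp if timestamp is not None else time.time())
    return f"{path} {value} {timestamp}\n"

def send_counter(host, port, path, value):
    """Send a single datapoint to a Graphite/carbon plaintext endpoint."""
    message = format_graphite_metric(path, value)
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(message.encode("ascii"))

# e.g. send_counter("127.0.0.1", 2003, "traffic.town-centre.entry", 1)
```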

Switch Detector Model

The sample lets you choose the model used for object detection. Choose between NVIDIA PeopleNet, YOLOv8, YOLOv11 and YOLOv4 by setting the DETECTOR environment variable in the .env file:

  • DETECTOR=peoplenet for PeopleNet
  • DETECTOR=yolov8m for YOLOv8m
  • DETECTOR=yolov8s for YOLOv8s
  • DETECTOR=yolov11s for YOLOv11s
  • DETECTOR=yolov11n for YOLOv11n
  • DETECTOR=yolov4 for YOLOv4
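In code, such a switch typically reduces to reading the variable and mapping it to a model configuration. A rough sketch with hypothetical config values; the real mapping lives in the sample's module configuration:

```python
import os

# Hypothetical per-detector settings; the actual sample resolves
# model files and parameters inside its module configuration.
DETECTOR_CONFIGS = {
    'peoplenet': {'framework': 'tao'},
    'yolov8m': {'framework': 'yolo'},
    'yolov8s': {'framework': 'yolo'},
    'yolov11s': {'framework': 'yolo'},
    'yolov11n': {'framework': 'yolo'},
    'yolov4': {'framework': 'yolo'},
}

def resolve_detector(default='yolov8s'):
    """Pick the detector named by the DETECTOR env variable, or the default."""
    name = os.environ.get('DETECTOR', default)
    if name not in DETECTOR_CONFIGS:
        raise ValueError(f'unknown DETECTOR value: {name}')
    return name, DETECTOR_CONFIGS[name]
```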

Performance Measurement

Download the video file to the data folder. For example:

# you are expected to be in Savant/ directory

mkdir -p data && curl -o data/AVG-TownCentre.mp4 \
   https://eu-central-1.linodeobjects.com/savant-data/demo/AVG-TownCentre.mp4

Now you are ready to run the performance benchmark with the following command:

./samples/traffic_meter/run_perf.sh

Note: change the value of the DATA_LOCATION variable in the run_perf.sh script if you use a different video.

Note: the yolov8s detector is used by default.