The app shows how to use Torch Hub and run PyTorch inference in Savant with a reference PyTorch model. It also shows how to interact with images in GPU memory. The YOLOP model is used for object detection and semantic segmentation.
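For context, the reference model can be pulled from Torch Hub as described in the upstream YOLOP repository. The following standalone sketch shows the hub path and the model's three outputs; it is illustrative and is not the demo module's actual loading code:

```python
import torch

# Illustrative sketch: load the reference YOLOP model from Torch Hub,
# following the upstream hustvl/YOLOP repository. Not the demo's own code.
model = torch.hub.load('hustvl/yolop', 'yolop', pretrained=True)
model.eval()

# YOLOP takes a 640x640 input and returns three heads: object detection,
# drivable-area segmentation, and lane-line segmentation.
img = torch.randn(1, 3, 640, 640)
with torch.no_grad():
    det_out, da_seg_out, ll_seg_out = model(img)
```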
Preview:
Tested on platforms:
- Nvidia Turing
- Nvidia Jetson Orin family
Demonstrated adapters:
- Video loop source adapter;
- Always-ON RTSP sink adapter.
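The GPU-memory interaction mentioned above follows the general OpenCV CUDA pattern. The sketch below is generic — it assumes an OpenCV build with the CUDA modules enabled, which plain opencv-python from PyPI lacks — and is not Savant's internal code:

```python
import cv2
import numpy as np

# Generic sketch of manipulating an image that lives in GPU memory.
# Assumes an OpenCV build with the CUDA modules enabled; the stock
# opencv-python wheel from PyPI has no cv2.cuda.
frame = np.random.randint(0, 255, (720, 1280, 3), dtype=np.uint8)

gpu_frame = cv2.cuda_GpuMat()
gpu_frame.upload(frame)  # host -> device copy

# Resize entirely on the GPU, without a round-trip to host memory.
resized = cv2.cuda.resize(gpu_frame, (640, 384))

result = resized.download()  # device -> host copy, only when needed
```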
To get started, clone the repository and check that your environment is compatible:

```bash
git clone https://github.com/insight-platform/Savant.git
cd Savant
git lfs pull
./utils/check-environment-compatible
```

Note: The Ubuntu 22.04 runtime configuration guide helps to configure the runtime to run Savant pipelines.
The demo uses models that are compiled into TensorRT engines the first time the demo is run. This takes time. Optionally, you can prepare the engines before running the demo by using the command:
```bash
# you are expected to be in Savant/ directory
./scripts/run_module.py --build-engines samples/panoptic_driving_perception/module.yml
```

Run the demo:

```bash
# you are expected to be in Savant/ directory

# if x86
docker compose -f samples/panoptic_driving_perception/docker-compose.x86.yml up
# if Jetson
docker compose -f samples/panoptic_driving_perception/docker-compose.l4t.yml up
# open 'rtsp://127.0.0.1:554/stream/panoptic-driving-perception' in your player
# or visit 'http://127.0.0.1:888/stream/panoptic-driving-perception/' (LL-HLS)
# Ctrl+C to stop running the compose bundle
```
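If you prefer to check the stream from code instead of a player, a minimal sketch using OpenCV's VideoCapture (assuming opencv-python with an FFmpeg backend is installed on the host) looks like this:

```python
import cv2

# Optional sketch: confirm the demo's RTSP output is live by grabbing one
# frame. Assumes opencv-python with an FFmpeg backend; the URL is the one
# printed in the run commands above.
URL = 'rtsp://127.0.0.1:554/stream/panoptic-driving-perception'

cap = cv2.VideoCapture(URL)
ok, frame = cap.read()
if ok:
    print(f'stream is up, frame shape: {frame.shape}')
else:
    print('could not read a frame; is the compose bundle running?')
cap.release()
```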
Download the video file to the data folder. For example:

```bash
# you are expected to be in Savant/ directory
mkdir -p data && curl -o data/panoptic_driving_perception.mp4 \
  https://eu-central-1.linodeobjects.com/savant-data/demo/panoptic_driving_perception.mp4
```

Now you are ready to run the performance benchmark with the following command:

```bash
./samples/panoptic_driving_perception/run_perf.sh
```

Note: Change the value of the `DATA_LOCATION` variable in the `run_perf.sh` script if you changed the video.
