A benchmarking toolset that can plot trajectories and compute various accuracy metrics for VISLAM (visual-inertial SLAM) algorithms.
The directory `examples/two_sessions` contains an example dataset with ground truth and VISLAM output trajectories for two sessions. You can run the benchmark for these using the following command; the output will be created under the `output/` directory:
```
python run.py -dataDir examples/two_sessions
```

See the options with `python run.py --help`.
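A common trajectory metric for this kind of benchmark is absolute trajectory error (ATE). The repository's exact metric definitions may differ; the following is only a minimal sketch of the idea, using translation-only alignment between two time-associated position arrays:

```python
import numpy as np

def ate_rmse(gt: np.ndarray, est: np.ndarray) -> float:
    """Root-mean-square absolute trajectory error between two
    time-aligned Nx3 position arrays. For simplicity this removes
    only the mean offset (translation-only alignment); a full
    benchmark would typically also align rotation and scale."""
    gt_centered = gt - gt.mean(axis=0)
    est_centered = est - est.mean(axis=0)
    per_pose_err = np.sum((gt_centered - est_centered) ** 2, axis=1)
    return float(np.sqrt(np.mean(per_pose_err)))

# Toy example: the estimate equals ground truth shifted by a constant
# offset, so the error after alignment is zero.
gt = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
est = gt + np.array([0.5, 0.0, 0.0])
print(ate_rmse(gt, est))  # 0.0
```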
The `run.py` script can also compute the output trajectories from sensor data recorded with the Spectacular AI SDK. The recording format is documented here.
The recording folders should be placed in a common folder, say sessions/, for example like this:
```
sessions
├── session01
│   ├── groundtruth.jsonl
│   ├── calibration.json
│   ├── data.jsonl
│   └── data.mkv
├── session02
│   ├── vio_config.yaml
│   ├── calibration.json
│   ├── data.jsonl
│   └── data.mkv
├── session03
│   ├── output.jsonl
│   └── groundtruth.jsonl
└── vio_config.yaml
```
Then the benchmark can be run with:
```
pip install spectacularAI numpy scipy matplotlib
python run.py -dataDir sessions
```

and the results can be found in the `output/` folder. Here are some details about the individual files in the example:
- `vio_config.yaml` and `calibration.json` can be placed in the parent `sessions/` folder. In the example, `session02` would use its own `session02/vio_config.yaml` and the other sessions the shared config.
- Ground truth poses for each session can either be placed in a separate `groundtruth.jsonl` file or mixed into the `data.jsonl`.
- When an `output.jsonl` file is present for a session, it will be used for outputs and SDK replay is skipped.
Based on https://github.com/AaltoML/vio_benchmark.
This repository is licensed under Apache 2.0 (see LICENSE).