This is the official repository for ETAP: Event-based Tracking of Any Point, accepted at CVPR 2025, by Friedhelm Hamann, Daniel Gehrig, Filbert Febryanto, Kostas Daniilidis and Guillermo Gallego.
- The first event-only point tracking method with strong cross-dataset generalization.
- Robust tracking in challenging conditions (high-speed motion, lighting changes).
- Evaluation benchmark based on six event camera datasets: EDS, EC, EVIMO2, EventKubric, E2D2, and PennAviary.
Qualitative examples: (1) synthetic dataset EventKubric, (2) feature tracking on EDS, (3) E2D2, (4) EVIMO2.
- Quickstart Demo
- Installation
- Model Selection
- Evaluation Tasks
- Synthetic Data Generation
- Acknowledgements
- Citation
- Additional Resources
- License
The quickest way to try ETAP is using our demo:
- Clone the repository:

  ```bash
  git clone https://github.com/tub-rip/ETAP.git
  cd ETAP
  ```
- Download the model weights and save them to `weights/ETAP_v1_cvpr25.pth`.
- Download the demo example (30 MB) and extract it to `data/demo_example`.
- Run the demo:

  ```bash
  python scripts/demo.py
  ```
This demo requires only basic dependencies: `torch`, `numpy`, `tqdm`, `matplotlib`, `imageio`, and `pillow`. No dataset preprocessing is needed!
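If you want to sanity-check the downloaded weights before running the demo, you can open the checkpoint with PyTorch. This is only an illustrative sketch; the internal layout of `ETAP_v1_cvpr25.pth` (e.g., whether it stores a plain state dict or nested metadata) is an assumption here, not documented behavior.

```python
# Illustrative only: inspect the downloaded checkpoint.
# The key layout inside ETAP_v1_cvpr25.pth is an assumption.
import torch

ckpt = torch.load("weights/ETAP_v1_cvpr25.pth", map_location="cpu")
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])  # e.g., a state_dict or nested metadata entries
else:
    print(type(ckpt))
```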
- Clone the repository:

  ```bash
  git clone git@github.com:tub-rip/ETAP.git
  cd ETAP
  ```
- Set up the environment:

  ```bash
  conda create --name ETAP python=3.10
  conda activate ETAP
  ```
- Install PyTorch (choose a command compatible with your CUDA version from the PyTorch website), e.g.:

  ```bash
  conda install pytorch torchvision pytorch-cuda=12.1 -c pytorch -c nvidia
  ```
- Install the other dependencies:

  ```bash
  pip install -r requirements.txt
  ```
Download the pre-trained model and move it into the folder `<repo-root>/weights/`. To reproduce the paper results, use the model `ETAP_v1_cvpr25.pth`.
The four evaluation sequences of the "Event-aided Direct Sparse Odometry Dataset" (EDS) can be downloaded in two ways:
Option 1: Download the sequences from the official web page:

- `01_peanuts_light`
- `02_rocket_earth_light`
- `08_peanuts_running`
- `14_ziggy_in_the_arena`

Choose the archive file option, which contains the events as an HDF5 file. Place all sequences in a common folder.
Option 2: Use our download script:

```bash
bash scripts/download_eds.sh
```
We also use the calibration data provided by EDS. No action is required, as it is included in this repository at `config/misc/eds/calib.yaml`. This is the same file as in the `00_calib` results from the official source.
The evaluation was introduced in DDFT. As with the calibration data, we have included the ground truth tracks at `config/misc/eds/gt_tracks`, so no additional steps are necessary. If you are interested in how the tracks are created, please refer to the DDFT repository.
Create a symbolic link to your data root at `<repository-root>/data`, or alternatively change the paths in the config files. The setup should look something like this:
```
data/eds/
├── 01_peanuts_light
│   └── events.h5
├── 02_rocket_earth_light
│   └── events.h5
├── 08_peanuts_running
│   └── events.h5
└── 14_ziggy_in_the_arena
    └── events.h5
```
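To verify the EDS download, the `events.h5` files can be opened with `h5py`. The sketch below only lists the file contents, since the internal dataset names are not documented here and may differ between releases.

```python
# Illustrative only: list the groups/datasets stored in an EDS events file.
import h5py

with h5py.File("data/eds/01_peanuts_light/events.h5", "r") as f:
    f.visit(print)  # prints every group and dataset name in the file
```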
Download the five evaluation sequences of the "Event Camera Dataset" (EC) from the official source, choosing the `Text (zip)` option. Unzip the sequences into a folder structure like this:
```
data/ec/
├── boxes_rotation
│   ├── calib.txt
│   ├── events.txt
│   ├── groundtruth.txt
│   ├── images
│   ├── images.txt
│   └── imu.txt
├── boxes_translation
│   ├── events.txt
│   ├── ...
├── shapes_6dof
│   ├── events.txt
│   ├── ...
├── shapes_rotation
│   ├── events.txt
│   ├── ...
└── shapes_translation
    ├── events.txt
    ├── ...
```
As with EDS, the ground truth tracks come from the evaluation introduced in DDFT, but we have included them at `config/misc/ec/gt` for convenience.
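For a quick look at the raw EC data, the text files can be loaded directly with NumPy. This is an illustrative sketch assuming the usual EC format of one event per line (`timestamp x y polarity`); it is not part of the preprocessing pipeline below.

```python
# Illustrative only: load raw EC events, assuming the "t x y polarity" text format.
import numpy as np

events = np.loadtxt("data/ec/boxes_rotation/events.txt")  # expected shape (N, 4)
t, x, y, p = events.T
print(f"{len(t)} events spanning {t[-1] - t[0]:.2f} s")
```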
Preprocess the data by transforming the raw events into event stacks with the following commands:
```bash
# For EDS dataset
python scripts/prepare_event_representations.py --dataset eds --config config/exe/prepare_event_representations/eds.yaml

# For EC dataset
python scripts/prepare_event_representations.py --dataset ec --config config/exe/prepare_event_representations/ec.yaml
```
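As a rough intuition for what this preprocessing produces (this is not the repository's exact implementation, and the bin count and normalization below are assumptions), an event stack accumulates signed event polarities into a fixed number of temporal bins per time window:

```python
# Simplified sketch of an event-stack representation (illustrative only).
import numpy as np

def event_stack(x, y, t, p, height, width, bins=10):
    """Accumulate signed polarities into `bins` temporal slices of size height x width."""
    stack = np.zeros((bins, height, width), dtype=np.float32)
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9)
    b = np.clip((t_norm * bins).astype(int), 0, bins - 1)
    # Assumes polarity p in {0, 1}; map it to {-1, +1} before accumulation.
    np.add.at(stack, (b, y.astype(int), x.astype(int)), 2 * p.astype(np.float32) - 1)
    return stack
```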
Run the tracking inference with:
```bash
python scripts/inference_online.py --config config/exe/inference_online/feature_tracking.yaml
```
Run the benchmarking script to evaluate the tracking results:
```bash
python scripts/benchmark_feature_tracking.py feature_tracking_eds_ec
```
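The benchmark script reports the metrics used in the paper. For intuition only, the core comparison between predicted and ground-truth tracks can be thought of as an endpoint-error computation; the sketch below uses hypothetical array shapes and is not the exact DDFT feature-tracking protocol.

```python
# Illustrative only: mean endpoint error between predicted and ground-truth tracks.
import numpy as np

def mean_endpoint_error(pred, gt):
    """pred, gt: arrays of shape (num_tracks, num_timesteps, 2) in pixels."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())
```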
- Download the required EVIMO2 sequences from the official source. You only need the Motion Segmentation / Object Recognition sequences for the `samsung_mono` camera in `.npz` format (2.4 GB). Unzip them and move them into the data directory.
- Download the precomputed tracks here and merge them into the data directory.

The result should look like this:
```
data/evimo/
└── samsung_mono
    └── imo
        └── eval
            ├── scene13_dyn_test_00_000000
            │   ├── dataset_classical.npz
            │   ├── dataset_depth.npz
            │   ├── dataset_events_p.npy
            │   ├── dataset_events_t.npy
            │   ├── dataset_events_xy.npy
            │   ├── dataset_info.npz
            │   ├── dataset_mask.npz
            │   └── dataset_tracks.h5
            ├── scene13_dyn_test_05_000000
            │   ├── dataset_classical.npz
            ...
```
- Precompute the event stacks:

  ```bash
  python scripts/prepare_event_representations.py --dataset evimo2 --config config/exe/prepare_event_representations/evimo2.yaml
  ```
Run inference and evaluation with a single command:

```bash
python scripts/inference_offline.py --config config/exe/inference_offline/evimo2.yaml
```
If you want to generate the point tracks yourself instead of using the precomputed ones:

```bash
python scripts/create_evimo2_track_gt.py --config config/misc/evimo2/val_samples.csv --data_root data/evimo2
```
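For a quick sanity check of the EVIMO2 download, the raw event arrays can be loaded directly with NumPy. The shapes and units below are assumptions; consult the EVIMO2 documentation for the authoritative format.

```python
# Illustrative only: load the raw event arrays of one EVIMO2 sequence.
import numpy as np

seq = "data/evimo/samsung_mono/imo/eval/scene13_dyn_test_00_000000"
t = np.load(f"{seq}/dataset_events_t.npy")    # event timestamps
xy = np.load(f"{seq}/dataset_events_xy.npy")  # pixel coordinates, assumed (N, 2)
p = np.load(f"{seq}/dataset_events_p.npy")    # polarities
print(t.shape, xy.shape, p.shape)
```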
- Download the event_kubric test set and move it to the data directory:
```
data/event_kubric
└── test
    ├── sample_000042
    │   ├── annotations.npy
    │   ├── data.hdf5
    │   ├── data_ranges.json
    │   ├── events
    │   │   ├── 0000000000.npz
    │   │   ├── 0000000001.npz
    │   │   ...
    │   ├── events.json
    │   └── metadata.json
    ├── sample_000576
    │   ├── annotations.npy
    │   ...
    ...
```
- Prepare the event stacks:

  ```bash
  python scripts/prepare_event_representations.py --dataset event_kubric --config config/exe/prepare_event_representations/event_kubric.yaml
  ```
Run inference and evaluation with a single command:

```bash
python scripts/inference_offline.py --config config/exe/inference_offline/event_kubric.yaml
```
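To verify an EventKubric sample after downloading, the files can be inspected without any repository code. The sketch below only lists what is stored; the internal keys of `data.hdf5` and the structure of `annotations.npy` are not documented here, so it assumes nothing about them beyond possibly needing `allow_pickle`.

```python
# Illustrative only: inspect one EventKubric test sample.
import json
import h5py
import numpy as np

sample = "data/event_kubric/test/sample_000042"
with h5py.File(f"{sample}/data.hdf5", "r") as f:
    f.visit(print)  # list the datasets stored in the sample
ann = np.load(f"{sample}/annotations.npy", allow_pickle=True)  # structure not documented here
with open(f"{sample}/metadata.json") as f:
    meta = json.load(f)
print(type(ann), list(meta)[:5])
```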
- Download the E2D2 fidget spinner sequence and move it to the data directory:
```
data/e2d2/
└── 231025_110210_fidget5_high_exposure
    ├── gt_positions.npy
    ├── gt_timestamps.npy
    ├── queries.npy
    └── seq.h5
```
- Prepare the event stacks:

  ```bash
  python scripts/prepare_event_representations.py --dataset e2d2 --config config/exe/prepare_event_representations/e2d2.yaml
  ```

- Run the inference and the benchmark:

  ```bash
  python scripts/inference_online.py --config config/exe/inference_online/e2d2.yaml
  python scripts/benchmark_tap.py --gt_dir data/e2d2/231025_110210_fidget5_high_exposure --pred_dir output/inference/tap_e2d2
  ```
The ground truth is calculated from the turning speed of the fidget spinner and is provided for download. To calculate the ground truth tracks yourself, run:

```bash
python scripts/create_e2d2_fidget_spinner_gt.py
```
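For intuition, the idea behind this ground truth is that points on the fidget spinner rotate around its center at a measured, constant angular velocity, so their positions at any timestamp follow from a 2D rotation of the query points. The sketch below is a toy version with hypothetical inputs, not the repository's script:

```python
# Toy sketch of rotation-based ground truth (illustrative only).
import numpy as np

def rotate_queries(queries, center, omega, timestamps, t0):
    """queries: (N, 2) pixel positions at time t0; omega: angular velocity in rad/s.

    Returns ground-truth positions of shape (N, T, 2) for the given timestamps.
    """
    angles = omega * (timestamps - t0)                                  # (T,)
    c, s = np.cos(angles), np.sin(angles)
    rot = np.stack([np.stack([c, -s], -1), np.stack([s, c], -1)], -2)   # (T, 2, 2)
    rel = queries - center                                              # (N, 2)
    return np.einsum("tij,nj->nti", rot, rel) + center                  # (N, T, 2)
```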
Download the penn_aviary sequence and move it to the data directory:
```
data/penn_aviary/
└── 231018_174107_view2
    ├── mask00082.png
    └── seq.h5
```
Run the inference with:

```bash
python scripts/inference_online.py --config config/exe/inference_online/penn_aviary.yaml
```
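This sequence is used for qualitative inference only. If you want to experiment with your own query points, one simple option (an assumption, not the repository's procedure; the actual queries come from the inference config) is to sample pixels inside the provided mask:

```python
# Illustrative only: sample query points inside the provided mask.
import numpy as np
from PIL import Image

mask = np.array(Image.open("data/penn_aviary/231018_174107_view2/mask00082.png"))
mask2d = mask if mask.ndim == 2 else mask[..., 0]
ys, xs = np.nonzero(mask2d > 0)
idx = np.random.choice(len(xs), size=min(64, len(xs)), replace=False)
queries = np.stack([xs[idx], ys[idx]], axis=1)  # (num_queries, 2) in pixel coordinates
print(queries.shape)
```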
We provide a 10-sample test set of EventKubric for quick evaluation. The complete dataset consists of approximately 10,000 samples.
To generate your own synthetic event data, please refer to the Data Pipeline Instructions.
We gratefully acknowledge the following repositories and thank the authors for their excellent work:
If you find this work useful in your research, please consider citing:
```bibtex
@InProceedings{Hamann25cvpr,
  author    = {Friedhelm Hamann and Daniel Gehrig and Filbert Febryanto and Kostas Daniilidis and Guillermo Gallego},
  title     = {{ETAP}: Event-based Tracking of Any Point},
  booktitle = {{IEEE/CVF} Conf. Computer Vision and Pattern Recognition ({CVPR})},
  year      = {2025},
}
```
- Research page (TU Berlin, RIP lab)
- Course at TU Berlin
- Survey paper
- List of Event-based Vision Resources
This project is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License - see the LICENSE file for details. This means you are free to share and adapt the material for non-commercial purposes, provided you give appropriate credit and indicate if changes were made.