ETAP: Event-based Tracking of Any Point


Introduction

This is the official repository for ETAP: Event-based Tracking of Any Point, accepted at CVPR 2025, by Friedhelm Hamann, Daniel Gehrig, Filbert Febryanto, Kostas Daniilidis and Guillermo Gallego.


Key Features

  • The first event-only point tracking method with strong cross-dataset generalization.
  • Robust tracking in challenging conditions (high-speed motion, lighting changes).
  • Evaluation benchmark based on six event camera datasets: EDS, EC, EVIMO2, EventKubric, E2D2 and PennAviary

Example Predictions

  • Example 1: Synthetic dataset EventKubric
  • Example 2: Feature Tracking on EDS
  • Example 3: E2D2
  • Example 4: EVIMO2


Quickstart Demo

The quickest way to try ETAP is using our demo:

  1. Clone the repository:

    git clone https://github.com/tub-rip/ETAP.git
    cd ETAP
  2. Download the model weights and save them to weights/ETAP_v1_cvpr25.pth

  3. Download the demo example (30 MB) and extract it to data/demo_example

  4. Run the demo:

    python scripts/demo.py

This demo requires only basic dependencies: torch, numpy, tqdm, matplotlib, imageio, and pillow. No dataset preprocessing needed!
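
If the demo does not find its inputs, a small path check can confirm that the weights and demo data ended up where the steps above expect them (a convenience sketch, not part of the repository):

from pathlib import Path

# Locations expected by the quickstart steps above.
expected = [
    Path("weights/ETAP_v1_cvpr25.pth"),  # model weights
    Path("data/demo_example"),           # extracted demo data
]

for p in expected:
    print(("ok     " if p.exists() else "MISSING"), p)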

Installation

  1. Clone the repository:

    git clone [email protected]:tub-rip/ETAP.git
    cd ETAP
  2. Set up the environment:

    conda create --name ETAP python=3.10
    conda activate ETAP
  3. Install PyTorch (choose a command compatible with your CUDA version from the PyTorch website), e.g.:

    conda install pytorch torchvision pytorch-cuda=12.1 -c pytorch -c nvidia
  4. Install other dependencies:

    pip install -r requirements.txt
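
After these steps, you can quickly confirm that PyTorch was installed with a working CUDA build (assuming a CUDA-capable GPU and driver are present):

import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))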

Model Selection

Download the pre-trained model and move it into the folder <repository-root>/weights/.

To reproduce the paper results, use the model ETAP_v1_cvpr25.pth.

Evaluation Tasks and Datasets

Evaluation: Feature Tracking (EDS, EC)

Preparations

Download EDS (Prophesee Gen3 640 x 480 px)

The four evaluation sequences of the "Event-aided Direct Sparse Odometry Dataset" (EDS) can be downloaded in two ways:

Option 1: Download the following sequences from the official web page:

  • 01_peanuts_light
  • 02_rocket_earth_light
  • 08_peanuts_running
  • 14_ziggy_in_the_arena

Choose the archive file option, which contains the events as an HDF5 file. Place all sequences in a common folder.

Option 2: Use our download script:

bash scripts/download_eds.sh

We also use the calibration data provided by EDS. No action is required, as it is included in this repository at config/misc/eds/calib.yaml. This is the same file as in the 00_calib results from the official source.

The evaluation was introduced in DDFT. As with the calibration data, the ground truth tracks are included in this repository at config/misc/eds/gt_tracks, so no additional steps are necessary. If you are interested in how the tracks are created, please refer to the DDFT repository.

Create a symbolic link from your data root to <repository-root>/data, or alternatively change the paths in the config files. The setup should look like this:

data/eds/
   ├── 01_peanuts_light
   │   └── events.h5
   ├── 02_rocket_earth_light
   │   └── events.h5
   ├── 08_peanuts_running
   │   └── events.h5
   └── 14_ziggy_in_the_arena
      └── events.h5
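
To sanity-check a downloaded events.h5 file, you can list its contents with h5py. The sketch below is a generic HDF5 inspection and does not assume any particular internal layout; the sequence path is just an example from the tree above.

import h5py

# Print every group and dataset in the file, with shapes for datasets.
with h5py.File("data/eds/01_peanuts_light/events.h5", "r") as f:
    def show(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")
        else:
            print(f"{name}/")
    f.visititems(show)
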
Download EC (DAVIS240C 240 x 180 px)

Download the five evaluation sequences of the "Event Camera Dataset" (EC) from the official source, choosing the Text (zip) download option. Unzip the sequences into a folder structure like this:

data/ec/
   ├── boxes_rotation
   │   ├── calib.txt
   │   ├── events.txt
   │   ├── groundtruth.txt
   │   ├── images
   │   ├── images.txt
   │   └── imu.txt
   ├── boxes_translation
   │   ├── events.txt
   │   ├── ...
   ├── shapes_6dof
   │   ├── events.txt
   │   ├── ...
   ├── shapes_rotation
   │   ├── events.txt
   │   ├── ...
   └── shapes_translation
       ├── events.txt
       ├── ...

As with EDS, the ground truth tracks come from the evaluation introduced in DDFT, but we have included them at config/misc/ec/gt for convenience.
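
To take a quick look at the raw EC events, you can read the text files directly. This assumes the dataset's standard one-event-per-line format (timestamp in seconds, x, y, polarity); it is only an inspection sketch.

import numpy as np

# Load the first 1000 events of one sequence ("t x y p" per line).
events = np.loadtxt("data/ec/boxes_rotation/events.txt", max_rows=1000)
t, x, y, p = events.T
print(f"{len(events)} events, t in [{t.min():.6f}, {t.max():.6f}] s")
print(f"x in [{x.min():.0f}, {x.max():.0f}], y in [{y.min():.0f}, {y.max():.0f}]")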

Preprocessing

Preprocess the data by transforming the raw events into event stacks with the following commands:

# For EDS dataset
python scripts/prepare_event_representations.py --dataset eds --config config/exe/prepare_event_representations/eds.yaml

# For EC dataset
python scripts/prepare_event_representations.py --dataset ec --config config/exe/prepare_event_representations/ec.yaml
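
For intuition, an event stack groups events into a fixed number of temporal bins and accumulates them per pixel. The sketch below only illustrates that idea with hypothetical parameters; the actual representation used by the script is defined by the config files above.

import numpy as np

def event_stack(t, x, y, p, num_bins=10, height=480, width=640):
    """Accumulate signed event counts into `num_bins` temporal slices."""
    stack = np.zeros((num_bins, height, width), dtype=np.float32)
    # Map each timestamp to a bin index in [0, num_bins - 1].
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9)
    bins = np.clip((t_norm * num_bins).astype(int), 0, num_bins - 1)
    # Signed accumulation, assuming polarity is encoded as 0/1.
    np.add.at(stack, (bins, y.astype(int), x.astype(int)),
              2.0 * p.astype(np.float32) - 1.0)
    return stack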

Inference

Run the tracking inference with:

python scripts/inference_online.py --config config/exe/inference_online/feature_tracking.yaml

Evaluation

Run the benchmarking script to evaluate the tracking results:

python scripts/benchmark_feature_tracking.py feature_tracking_eds_ec

Evaluation: EVIMO2 (Samsung DVS Gen3 640 x 480 px)

Preparations

  1. Download the required EVIMO2 sequences from the official source. You only need the Motion Segmentation / Object Recognition sequences for the samsung_mono camera in .npz format (2.4 GB). Unzip them and move them into the data directory.

  2. Download the precomputed tracks here and merge them into the data directory.

The result should look like this:

data/evimo/
└── samsung_mono
    └── imo
        └── eval
            ├── scene13_dyn_test_00_000000
            │   ├── dataset_classical.npz
            │   ├── dataset_depth.npz
            │   ├── dataset_events_p.npy
            │   ├── dataset_events_t.npy
            │   ├── dataset_events_xy.npy
            │   ├── dataset_info.npz
            │   ├── dataset_mask.npz
            │   └── dataset_tracks.h5
            ├── scene13_dyn_test_05_000000
            │   ├── dataset_classical.npz
            ... ...
  3. Precompute the event stacks:
python scripts/prepare_event_representations.py --dataset evimo2 --config config/exe/prepare_event_representations/evimo2.yaml
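
Optionally, you can inspect the raw event arrays of a downloaded sequence. The file names come from the layout above; the array shapes are not documented here, so this is only a quick look.

import numpy as np

seq = "data/evimo/samsung_mono/imo/eval/scene13_dyn_test_00_000000"
# Memory-map the arrays so large files are not read into RAM at once.
t  = np.load(f"{seq}/dataset_events_t.npy", mmap_mode="r")   # timestamps
xy = np.load(f"{seq}/dataset_events_xy.npy", mmap_mode="r")  # pixel coordinates
p  = np.load(f"{seq}/dataset_events_p.npy", mmap_mode="r")   # polarities
print("t:", t.shape, "xy:", xy.shape, "p:", p.shape)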

Inference & Evaluation

Run inference and evaluation with a single command:

python scripts/inference_offline.py --config config/exe/inference_offline/evimo2.yaml

Ground Truth Track Generation (Optional)

If you want to generate the point tracks yourself instead of using the precomputed ones:

python scripts/create_evimo2_track_gt.py --config config/misc/evimo2/val_samples.csv --data_root data/evimo2

Evaluation: EventKubric (synthetic 512 x 512 px)

Preparations

  1. Download the event_kubric test set and move it to the data directory:
data/event_kubric
└── test
    ├── sample_000042
    │   ├── annotations.npy
    │   ├── data.hdf5
    │   ├── data_ranges.json
    │   ├── events
    │   │   ├── 0000000000.npz
    │   │   ├── 0000000001.npz
    │   │   ...
    │   ├── events.json
    │   └── metadata.json
    ├── sample_000576
    │   ├── annotations.npy
    │   ...
    ...
  2. Prepare the event stacks:
python scripts/prepare_event_representations.py --dataset event_kubric --config config/exe/prepare_event_representations/event_kubric.yaml
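
The events of each sample are stored as a series of .npz chunks. If you want to see what a chunk contains without assuming its internal keys, a quick inspection sketch:

import json
import numpy as np

sample = "data/event_kubric/test/sample_000042"

# List the arrays stored in the first event chunk.
chunk = np.load(f"{sample}/events/0000000000.npz")
print({name: chunk[name].shape for name in chunk.files})

# Peek at the sample's metadata.
with open(f"{sample}/metadata.json") as f:
    print(list(json.load(f).keys()))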

Inference & Evaluation

Run inference and evaluation with a single command:

python scripts/inference_offline.py --config config/exe/inference_offline/event_kubric.yaml

Evaluation: E2D2 (SilkyEvCam 640 x 480 px)

Preparations

  1. Download the E2D2 fidget spinner sequence and move it to the data directory:
data/e2d2/
└── 231025_110210_fidget5_high_exposure
    ├── gt_positions.npy
    ├── gt_timestamps.npy
    ├── queries.npy
    └── seq.h5
  2. Prepare the event stacks:
python scripts/prepare_event_representations.py --dataset e2d2 --config config/exe/prepare_event_representations/e2d2.yaml
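
To verify the downloaded ground truth and query files (file names taken from the layout above; the exact array shapes are not documented here):

import numpy as np

seq = "data/e2d2/231025_110210_fidget5_high_exposure"
gt_positions  = np.load(f"{seq}/gt_positions.npy")
gt_timestamps = np.load(f"{seq}/gt_timestamps.npy")
queries       = np.load(f"{seq}/queries.npy")
print("gt_positions:", gt_positions.shape)
print("gt_timestamps:", gt_timestamps.shape)
print("queries:", queries.shape)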

Inference

python scripts/inference_online.py --config config/exe/inference_online/e2d2.yaml

Evaluation

python scripts/benchmark_tap.py --gt_dir data/e2d2/231025_110210_fidget5_high_exposure --pred_dir output/inference/tap_e2d2

Ground Truth Generation (Optional)

The ground truth is calculated from the turning speed of the fidget spinner and is provided for download. To calculate the ground truth tracks yourself, run:

python scripts/create_e2d2_fidget_spinner_gt.py
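
As a rough illustration of the underlying idea, not the script's actual implementation: if the spinner rotates at a known, constant angular velocity, a query point can be propagated by rotating it around the spinner's center. All values below (center, angular velocity, time offset) are hypothetical.

import numpy as np

def rotate_point(xy, center, omega, t):
    """Rotate 2D point `xy` around `center` by an angle of omega * t radians."""
    xy, center = np.asarray(xy, float), np.asarray(center, float)
    angle = omega * t
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]])
    return center + rot @ (xy - center)

# Hypothetical example: a point at (400, 200) px, spinner center at (320, 240) px,
# angular velocity 10 rad/s, evaluated 50 ms after the query time.
print(rotate_point((400.0, 200.0), (320.0, 240.0), 10.0, 0.05))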

Evaluation: PennAviary (SilkyEvCam 640 x 480 px, Qualitative)

Preparations

Download the penn_aviary sequence and move it to the data directory:

data/penn_aviary/
└── 231018_174107_view2
    ├── mask00082.png
    └── seq.h5

Inference

Run the inference with:

python scripts/inference_online.py --config config/exe/inference_online/penn_aviary.yaml

Synthetic Data Generation (EventKubric)

We provide a 10-sample test set of EventKubric for quick evaluation. The complete dataset consists of approximately 10,000 samples.

To generate your own synthetic event data, please refer to the Data Pipeline Instructions.

Acknowledgements

We gratefully acknowledge the following repositories and thank their authors for their excellent work:

Citation

If you find this work useful in your research, please consider citing:

@InProceedings{Hamann25cvpr,
  author={Friedhelm Hamann and Daniel Gehrig and Filbert Febryanto and Kostas Daniilidis and Guillermo Gallego},
  title={{ETAP}: Event-based Tracking of Any Point},
  booktitle={{IEEE/CVF} Conf. Computer Vision and Pattern Recognition ({CVPR})},
  year=2025,
}

Additional Resources

License

This project is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License - see the LICENSE file for details. This means you are free to share and adapt the material for non-commercial purposes, provided you give appropriate credit and indicate if changes were made.
