The official PyTorch implementation of the 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) paper Social EgoMesh Estimation (SEE-ME). SEE-ME is a project focused on egocentric motion estimation. This repository provides the necessary setup instructions, training/testing pipelines, and key components for running the model.
```bash
conda create -n seeme python=3.8.18
conda activate seeme
pip install -r requirements.txt
```
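As a quick sanity check after activating the environment, you can confirm the interpreter matches the pinned Python version. This helper is a hypothetical convenience, not part of the repository:

```python
import sys

def env_matches(required=(3, 8)):
    """Return True if the running interpreter matches the required
    (major, minor) version, e.g. the 3.8 pinned by the conda command above."""
    return sys.version_info[:2] == tuple(required)
```

Running `env_matches()` inside the `seeme` environment should return `True`; in any other interpreter it flags the mismatch early, before dependency errors surface later.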
To download and prepare the SMPL model, run:
```bash
bash prepare/download_smpl_model.sh
```
| Model Component | Checkpoint Path |
|---|---|
| Interactee Only | Not yet available |
| Scene Only | Not yet available |
| Scene + Interactee | Not yet available |
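Once the checkpoints are released, a small lookup keyed by conditioning variant keeps the choice explicit. The variant names and paths below are placeholders of our own, since official paths are not yet available:

```python
from pathlib import Path

# Hypothetical variant -> checkpoint mapping; the keys and file names are
# placeholders until the official checkpoints are published.
CHECKPOINTS = {
    "interactee": "checkpoints/interactee_only.ckpt",
    "scene": "checkpoints/scene_only.ckpt",
    "scene_interactee": "checkpoints/scene_interactee.ckpt",
}

def resolve_checkpoint(variant):
    """Return the checkpoint Path for a conditioning variant, or raise
    ValueError listing the valid options."""
    try:
        return Path(CHECKPOINTS[variant])
    except KeyError:
        raise ValueError(
            f"unknown variant {variant!r}; choose from {sorted(CHECKPOINTS)}"
        )
```

Failing fast on an unknown variant avoids silently loading the wrong conditioning at test time.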
For the datasets, please refer to EgoBody and GIMO.
- Modify Line 114 in the configuration file to select the conditioning that matches the checkpoint in use.
- Modify Line 71 in the configuration file to set the number of repetitions for testing.
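If you prefer to script these edits rather than change the file by hand, a one-line-in-place rewrite is enough. The helper below is a hypothetical sketch (the config file name is an assumption); the line numbers follow the README:

```python
def set_config_line(path, lineno, new_line):
    """Replace one 1-indexed line of a text config file in place,
    e.g. line 114 (conditioning) or line 71 (test repetitions)."""
    with open(path) as f:
        lines = f.readlines()
    if not 1 <= lineno <= len(lines):
        raise IndexError(f"{path} has {len(lines)} lines, no line {lineno}")
    lines[lineno - 1] = new_line.rstrip("\n") + "\n"
    with open(path, "w") as f:
        f.writelines(lines)
```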
To train a model, refer to `train.sh`.

To test a model, refer to `test.sh`.
For evaluation, we use the metrics implemented in MLD.
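For orientation, MPJPE (Mean Per-Joint Position Error) is the standard error measure for body pose/mesh estimation. The sketch below is illustrative only and not the repository's (or MLD's) exact implementation:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Euclidean distance between predicted and ground-truth joints.

    pred, gt: arrays of shape (num_joints, 3) in the same coordinate frame
    (i.e. after any alignment the protocol requires).
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    return float(np.linalg.norm(pred - gt, axis=-1).mean())
```

For example, a prediction that is off by 1 unit on one of two joints and exact on the other yields an MPJPE of 0.5.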
For rendering, refer to `render.sh`.
Contributions are welcome! Please follow these steps:

1. Fork the repository.
2. Create a feature branch (`git checkout -b feature-name`).
3. Commit your changes (`git commit -m "Add new feature"`).
4. Push to the branch (`git push origin feature-name`).
5. Open a pull request.
```bibtex
@misc{scofano2024socialegomeshestimation,
  title={Social EgoMesh Estimation},
  author={Luca Scofano and Alessio Sampieri and Edoardo De Matteis and Indro Spinelli and Fabio Galasso},
  year={2024},
  eprint={2411.04598},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2411.04598},
}
```