Quang Nguyen • Tri Le • Baoru Huang • Minh Nhat Vu • Ngan Le • Thieu Vo • Anh Nguyen
The website and the preprint will be published soon!
Follow these steps to install the TCM framework:
- Clone recursively:

  ```bash
  git clone --recursive https://github.com/Fsoft-AIC/TCM.git
  cd TCM
  ```

- Prepare environment (a sanity check for this setup is sketched below, after the Quick start block):

  ```bash
  conda create -n graspmas python=3.9 -y
  conda activate graspmas
  conda install -c "nvidia/label/cuda-11.8.0" cuda-toolkit
  conda install pytorch==2.2.0 torchvision==0.17.0 torchaudio==2.2.0 pytorch-cuda=11.8 -c pytorch -c nvidia
  pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py39_cu113_pyt1110/download.html
  pip install -r requirements.txt
  ```
- Quick start:

  ```python
  import torch

  # NOTE: adjust this import to wherever CondAMamba lives in the repo.
  from models.cond_mamba import CondAMamba

  # Build the temporally conditional Mamba model.
  model = CondAMamba(
      in_channels=163,   # per-frame input feature dimension
      d_model=512,       # hidden state size
      d_cond=128,        # conditioning embedding size
      n_layer=8,         # number of Mamba layers
      num_frames=150,    # sequence length in frames
      num_joints=24,     # number of skeleton joints
      device="cuda",
      use_pe=1,          # enable positional encoding
  ).to("cuda")

  x = torch.rand(10, 150, 163).to("cuda")                    # batch of motion sequences
  t = torch.randint(0, 1000, (10,), device=x.device).long()  # diffusion timesteps
  modal_emb = torch.rand(10, 150, 128).to("cuda")            # per-frame conditioning embedding
  o = model(x, t, modal_emb)

  param_count = sum(p.numel() for p in model.parameters() if p.requires_grad)
  print(f"Param count: {param_count}")
  print(o.shape)
  print(model.final_layer.linear.weight.dtype)
  ```
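The `t` argument suggests CondAMamba is used as a diffusion denoiser, in which case `o` should match the input shape, i.e. `torch.Size([10, 150, 163])`. The sketch below shows how such a denoiser is typically trained with a standard DDPM noise-prediction objective; `model`, `x`, and `modal_emb` come from the quick start above, while the linear noise schedule and MSE loss are illustrative assumptions, not the repo's actual training code (see `train_music2dance.sh` for that).

```python
import torch
import torch.nn.functional as F

# ASSUMPTION: a generic DDPM noise-prediction step for illustration only;
# the repo's actual objective and schedule live behind train_music2dance.sh.
T = 1000
betas = torch.linspace(1e-4, 0.02, T, device="cuda")  # linear beta schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)    # cumulative product of (1 - beta_t)

def diffusion_loss(model, x0, modal_emb):
    """One training step: corrupt x0 with noise at a random timestep t,
    then ask the model to predict that noise."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,), device=x0.device)
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].view(b, 1, 1)                  # broadcast over (frames, features)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise   # forward process q(x_t | x_0)
    pred = model(x_t, t, modal_emb)                          # CondAMamba as the denoiser
    return F.mse_loss(pred, noise)

# Example with the tensors from the quick start:
# loss = diffusion_loss(model, x, modal_emb)
```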
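Before training on your own machine, a quick sanity check (a minimal sketch; exact version strings may differ per install) can confirm the environment from the setup steps above:

```python
# Environment sanity check: verify the conda setup from the install steps.
import torch

print(torch.__version__)          # expected: 2.2.0
print(torch.version.cuda)         # expected: 11.8
print(torch.cuda.is_available())  # expected: True on a CUDA-capable machine

try:
    import pytorch3d
    print("pytorch3d:", pytorch3d.__version__)
except ImportError as err:
    print("pytorch3d failed to import:", err)
```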
Training and Validating

To train the model on the music2dance task, please run:

```bash
bash train_music2dance.sh
```

To validate the model's performance, please run:

```bash
bash eval_music2dance.sh
```

To infer a single sample, please run:
```bash
bash infer_music2dance.sh
```

Citation

```bibtex
@article{nguyen2025learning,
title={Learning Human Motion with Temporally Conditional Mamba},
author={Nguyen, Quang and Le, Tri and Huang, Baoru and Vu, Minh Nhat and Le, Ngan and Vo, Thieu and Nguyen, Anh},
journal={arXiv preprint arXiv:2510.12573},
year={2025}
}
```