Questions about details of t2m evaluation and release of baseline script #103

Open
PerfectBlueFeynman opened this issue Oct 29, 2024 · 0 comments

Comments


PerfectBlueFeynman commented Oct 29, 2024

Dear authors,

Thank you for the amazing work on the new, rich motion-text datasets.

I'm trying to reproduce the evaluation results on Motion-X for several of the models listed in the paper.

1. May I ask how you handle the frame-level pose text with word and POS tagging in the t2m text encoder? (See the first snippet after this list for what I currently do.)
2. Do you use the pose text or the semantic labels for evaluation?
3. For the cross-dataset comparison with HumanML3D, do you keep HumanML3D at 20 fps, or do you resample so that Motion-X and HumanML3D match (30 vs. 20 fps)? (See the second snippet below.)
4. How do you set up the parameters for training the t2m matching model on Motion-X?
5. Why are the test results on HumanML3D with MLD trained on HumanML3D different from those reported in the original MLD paper?
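For reference, here is roughly what I mean in questions 1 and 3. For the word/POS tags, this is how I currently preprocess the pose text before feeding it to the t2m text encoder; the HumanML3D-style "word/POS" format and the use of spaCy are just my own assumptions, so please correct me if your pipeline differs:

```python
import spacy

# My assumption: the t2m text encoder expects HumanML3D-style "word/POS" tokens.
nlp = spacy.load("en_core_web_sm")

def to_word_pos(caption: str) -> list[str]:
    """Convert a caption into 'word/POS' tokens, e.g. 'person/NOUN walks/VERB'."""
    return [f"{tok.text}/{tok.pos_}" for tok in nlp(caption)]
```

And for the fps question, this is the kind of resampling I have in mind: simple linear interpolation from 30 fps down to 20 fps. Again, this is only my own sketch, not something I found in the repo:

```python
import numpy as np

def resample_motion(motion: np.ndarray, src_fps: float = 30.0, tgt_fps: float = 20.0) -> np.ndarray:
    """Resample a (num_frames, feat_dim) motion array to a new frame rate."""
    num_src = motion.shape[0]
    duration = (num_src - 1) / src_fps            # clip length in seconds
    num_tgt = int(round(duration * tgt_fps)) + 1  # frame count at the target rate
    src_t = np.arange(num_src) / src_fps          # original timestamps
    tgt_t = np.linspace(0.0, duration, num_tgt)   # target timestamps
    # Interpolate each feature channel independently along the time axis.
    return np.stack(
        [np.interp(tgt_t, src_t, motion[:, d]) for d in range(motion.shape[1])],
        axis=1,
    )
```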

Also, you mentioned in a previous issue (#33 (comment)) that you were about to release the baseline script; may I ask for an estimated date?

Thank you!
