Replicating the results of the pre-trained models. #27
Comments
Hi, thanks for your interest! We are happy to figure out the problem together with you. Could you please provide the configurations for training (opt file) and testing (scripts) for each stage?
I just used the provided code and commands without any changes.
Hi wenhao, thanks for your info. I've checked the configurations and corrected the scripts in our README. Hope you will find it useful.
Hi, after setting
I met the same problem and got the same results as @weihaosky.
No. I still cannot replicate the results.
Have you solved the problem now? @weihaosky @aszxnm
The same for me...
Have you solved the problem now? @weihaosky @aszxnm Thanks.
No. I still cannot replicate the results.
Thank you all for your attempts to replicate the results. I just got some time to re-train the masked-transformer and res-transformer using our released code. Here are some results for your reference. I used the RVQ checkpoint I obtained here: #27 (comment). I used the following scripts to train the m-trans and r-trans:
evaluation script:
The above results were obtained from these scripts without any modification to this code base. The replication experiments were done on a single RTX 2080 Ti GPU with torch==1.7.1. For the processed dataset cloned from the original HumanML3D project, please send an inquiry to [email protected] or [email protected]
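The exact training and evaluation commands referenced above are not reproduced in this thread. As a generic illustration only (not the repository's actual scripts), a determinism setup like the following is often pinned before re-running training or evaluation when trying to match reported numbers; the seed value and flags here are assumptions:

```python
# Minimal, generic sketch of a reproducibility setup (not this repo's script).
# The seed value and cuDNN flags are assumptions; adapt them to your own runs.
import random

import numpy as np
import torch


def seed_everything(seed: int = 42) -> None:
    """Seed Python, NumPy, and PyTorch RNGs and request deterministic cuDNN kernels."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Deterministic cuDNN kernels trade some speed for run-to-run reproducibility.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


if __name__ == "__main__":
    seed_everything(42)
    device = torch.cuda.get_device_name(0) if torch.cuda.is_available() else "cpu"
    print(torch.__version__, device)
```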
#27 (comment)
Any other suggestions? Thanks!
Scale difference means that the scale of the AMASS (HumanML3D) and KIT data is different, for example, inches vs. centimeters. So "the MPJPE results on these two datasets differ by several orders of magnitude" should not be an issue.
Oh, thanks! I checked the range of the means from these two datasets; the mean of the KIT dataset is three orders of magnitude higher than that of HumanML3D on average. Thank you! @Murrol
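For anyone who wants to run the same check, a small sketch is below. The `Mean.npy` file paths are assumptions about where the processed HumanML3D and KIT-ML statistics are stored; adjust them to your local layout.

```python
# Sketch of the magnitude check described above: compare the average magnitude
# of the dataset mean statistics for HumanML3D and KIT-ML.
# The paths below are assumptions, not the repository's guaranteed layout.
import numpy as np

humanml_mean = np.load("dataset/HumanML3D/Mean.npy")
kit_mean = np.load("dataset/KIT-ML/Mean.npy")

h_mag = np.abs(humanml_mean).mean()
k_mag = np.abs(kit_mean).mean()

print(f"HumanML3D mean magnitude: {h_mag:.4f}")
print(f"KIT-ML    mean magnitude: {k_mag:.4f}")
# A large ratio here would explain why raw MPJPE values on the two datasets
# differ by orders of magnitude even when the models behave similarly.
print(f"ratio (KIT / HumanML3D): {k_mag / h_mag:.1f}")
```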
Hello, I have some doubts regarding the reproduction of results. Currently, the minimum FID value I get while replicating the results on t2m is 0.059, and the other metrics are normal. However, after adding certain modules, the FID value is 0.047, while in your paper the FID value is 0.045. I would like to know whether this counts as an improvement. I am a bit unclear whether my modifications are effective, or whether my reproduced results are simply not correct.
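Whether a gap like 0.047 vs. 0.045 is meaningful usually depends on the spread across repeated evaluation runs rather than a single number. A minimal sketch of summarizing several independent evaluation runs with a mean and 95% confidence interval is below; the FID values in it are placeholders, not real results.

```python
# Minimal sketch: given FID values collected from repeated evaluation runs,
# report the mean and a 95% confidence interval so small gaps can be judged
# against run-to-run noise. The values below are placeholders.
import numpy as np

fid_runs = np.array([0.047, 0.049, 0.045, 0.051, 0.046])  # placeholder values

mean = fid_runs.mean()
# 95% CI half-width under a normal approximation: 1.96 * sample std / sqrt(n).
ci95 = 1.96 * fid_runs.std(ddof=1) / np.sqrt(len(fid_runs))

print(f"FID = {mean:.3f} +/- {ci95:.3f} (95% CI over {len(fid_runs)} runs)")
```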
My replicated RVQ FID is also 0.032. Has this been solved?
Same question; my replicated RVQ FID is similar (0.033).
Thanks for releasing this amazing work!
However, I cannot replicate the results of the pre-trained models using the provided code.
The results after training with the provided code are:
May I ask where the problem is? Many thanks!