Tianshuo Xu<sup>1,*</sup>, Zhifei Chen<sup>1,*</sup>, Leyi Wu<sup>1</sup>, Hao Lu<sup>1</sup>, Yuying Chen<sup>2</sup>, Lihui Jiang<sup>2</sup>, Bingbing Liu<sup>2</sup>, Yingcong Chen<sup>1,3,†</sup>

<sup>1</sup>HKUST(GZ), <sup>2</sup>Noah's Ark Lab, <sup>3</sup>HKUST

<sup>*</sup> Equal Contribution, <sup>†</sup> Corresponding Author
We introduce Motion Dreamer, a two-stage video generation framework that decouples motion reasoning from high-fidelity video synthesis, addressing the challenge of producing physically coherent videos.
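Since the official code is not yet released (see the roadmap below), the snippet that follows is only a minimal sketch of what a decoupled two-stage pipeline of this kind could look like. All class names, tensor shapes, and the choice of a dense flow field as the intermediate motion representation are illustrative assumptions, not the released API.

```python
import torch
import torch.nn as nn


class MotionReasoner(nn.Module):
    """Stage 1 (hypothetical): predict an intermediate motion
    representation, here a dense 2D motion field per future frame."""

    def __init__(self, channels: int = 3, horizon: int = 8):
        super().__init__()
        self.horizon = horizon
        self.net = nn.Conv2d(channels, 2 * horizon, kernel_size=3, padding=1)

    def forward(self, cond_frame: torch.Tensor) -> torch.Tensor:
        b, _, h, w = cond_frame.shape
        # (B, T, 2, H, W): one (dx, dy) field for each of T future frames.
        return self.net(cond_frame).view(b, self.horizon, 2, h, w)


class VideoSynthesizer(nn.Module):
    """Stage 2 (hypothetical): render high-fidelity frames conditioned
    on the input frame and the stage-1 motion representation."""

    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Conv2d(channels + 2, channels, kernel_size=3, padding=1)

    def forward(self, cond_frame: torch.Tensor, motion: torch.Tensor) -> torch.Tensor:
        frames = []
        for t in range(motion.shape[1]):
            # Condition each output frame on the image and its motion field.
            x = torch.cat([cond_frame, motion[:, t]], dim=1)
            frames.append(self.net(x))
        return torch.stack(frames, dim=1)  # (B, T, C, H, W)


if __name__ == "__main__":
    frame = torch.randn(1, 3, 64, 64)
    motion = MotionReasoner()(frame)           # stage 1: reason about motion
    video = VideoSynthesizer()(frame, motion)  # stage 2: synthesize frames
    print(video.shape)  # torch.Size([1, 8, 3, 64, 64])
```

The point of the decoupling is that stage 1 can be supervised and evaluated on motion plausibility alone, while stage 2 focuses purely on rendering fidelity given a fixed motion plan.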
| Task | Status |
| --- | --- |
| Release the collected driving data | [ ] |
| Release training/evaluation code | [ ] |
| Release the official model | [ ] |