Curation of research papers and datasets related to 3D garment digitization and simulation
- Classical Cloth Simulation
- Collision Handling and Contact Friction Modeling
- Neural Cloth Simulation
- DL for Simulation
- Inverse Cloth Simulation
- Avatar Generation
- Garment Generation
- Dynamic Human Reconstruction from Multiview Video
- Dynamic Human Reconstruction from Monocular Video
- Garment Reconstruction from Monocular Video
- Garment Reconstruction from Multiview Video
- Panel Based Garment Representation
- Clothed Human Reconstruction from Monocular Image or Video
- Learning Clothed Human Deformation from 3D scans
- Garment Retargeting
- Virtual Try On
- Physics Based Animation
- Cloth Simulation: 1 2
- Linear Implicit Solver
- FEM Simulation of 3D Deformable Solids: 1 2
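For quick reference, here is a minimal sketch of the linearly implicit (backward-Euler) time step that the "Linear Implicit Solver" link above refers to, in the spirit of Baraff-Witkin-style cloth solvers. This is an illustrative NumPy example, not code from any of the papers listed below; the mass-spring layout, constants, helper names, and the pinning-by-large-mass trick are assumptions made for the sketch.

```python
# Minimal sketch of one linearly implicit (backward-Euler) step for a
# mass-spring cloth. Illustrative only; setup and constants are assumptions.
import numpy as np

def spring_forces_and_jacobian(x, springs, rest, k):
    """Stacked spring forces f (n, 3) and stiffness Jacobian K = df/dx (3n, 3n)."""
    n = x.shape[0]
    f = np.zeros((n, 3))
    K = np.zeros((3 * n, 3 * n))
    I3 = np.eye(3)
    for (i, j), L0 in zip(springs, rest):
        d = x[j] - x[i]
        L = np.linalg.norm(d) + 1e-12
        u = d / L
        fij = k * (L - L0) * u          # force on i, pulling it toward j when stretched
        f[i] += fij
        f[j] -= fij
        # Standard stretch-spring Jacobian block d f_i / d x_j
        Kij = k * (np.outer(u, u) + (1.0 - L0 / L) * (I3 - np.outer(u, u)))
        for a, b, s in ((i, i, -1.0), (j, j, -1.0), (i, j, 1.0), (j, i, 1.0)):
            K[3 * a:3 * a + 3, 3 * b:3 * b + 3] += s * Kij
    return f, K

def implicit_step(x, v, m, springs, rest, k=500.0, h=1.0 / 60.0, gravity=(0.0, -9.81, 0.0)):
    """One linearly implicit step: solve (M - h^2 K) dv = h (f + h K v), then update x, v."""
    n = x.shape[0]
    f, K = spring_forces_and_jacobian(x, springs, rest, k)
    f = f + m[:, None] * np.asarray(gravity)   # add external gravity force
    M = np.kron(np.diag(m), np.eye(3))         # lumped (block-diagonal) mass matrix
    A = M - h * h * K
    b = h * (f.reshape(-1) + h * K @ v.reshape(-1))
    dv = np.linalg.solve(A, b).reshape(n, 3)
    v_new = v + dv
    x_new = x + h * v_new
    return x_new, v_new

# Usage: two particles joined by one unit-length spring; the first is "pinned"
# by giving it a very large mass (a crude stand-in for a real constraint).
x = np.array([[0.0, 0.0, 0.0], [0.0, -1.1, 0.0]])
v = np.zeros_like(x)
m = np.array([1e8, 1.0])
for _ in range(10):
    x, v = implicit_step(x, v, m, springs=[(0, 1)], rest=[1.0])
print(x[1])
```

The appeal of the linearly implicit formulation is that each step requires a single (sparse, in practice) linear solve instead of a full Newton iteration, which is why it is a common starting point in the cloth-simulation tutorials linked above.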
- DiffCloth: Differentiable Cloth Simulation with Dry Frictional Contact
Yifei Li, Tao Du, Kui Wu, Jie Xu, Wojciech Matusik
ACM TOG, 2022
- Incremental Potential Contact: Intersection- and Inversion-free, Large-Deformation Dynamics
Minchen Li, Zachary Ferguson, Teseo Schneider, Timothy Langlois, Denis Zorin, Daniele Panozzo, Chenfanfu Jiang, Danny M. Kaufman
SIGGRAPH, 2020
- An Implicit Frictional Contact Solver for Adaptive Cloth Simulation
Jie Li, Gilles Daviet, Rahul Narain, Florence Bertails-Descoubes, Matthew Overby, George Brown, Laurence Boissieux
SIGGRAPH, 2018
- Inverse Elastic Shell Design with Contact and Friction
Mickaël Ly, Romain Casati, Florence Bertails-Descoubes, Mélina Skouras, Laurence Boissieux
SIGGRAPH Asia, 2018
- I-Cloth: Incremental Collision Handling for GPU-Based Interactive Cloth Simulation
Min Tang, Tongtong Wang, Zhongyuan Liu, Ruofeng Tong, and Dinesh Manocha
SIGGRAPH Asia, 2018
- Implicit Contact Handling for Deformable Objects
Miguel A. Otaduy, Rasmus Tamstorf, Denis Steinemann, Markus Gross
EUROGRAPHICS, 2009
- Robust Treatment of Collisions, Contact and Friction for Cloth Animation
Robert Bridson, Ronald Fedkiw, John Anderson
SIGGRAPH, 2002
- Bayesian Differentiable Physics for Cloth Digitalization
Deshan Gong, Ningtao Mao, He Wang
CVPR, 2024
- A Neural-Network-Based Approach for Loose-Fitting Clothing
Yongxu Jin, Dalton Omens, Zhenglin Geng, Joseph Teran, Abishek Kumar, Kenji Tashiro, Ronald Fedkiw
ArXiv, 2024
- Data-Free Learning of Reduced-Order Kinematics
Nicholas Sharp, Cristian Romero, Alec Jacobson, Etienne Vouga, Paul Kry, David I.W. Levin, Justin Solomon
SIGGRAPH, 2023
- NeuralClothSim: Neural Deformation Fields Meet the Kirchhoff-Love Thin Shell Theory
Navami Kairanda, Marc Habermann, Christian Theobalt, Vladislav Golyanik
ArXiv, 2023
- SENC: Handling Self-collision in Neural Cloth Simulation
Zhouyingcheng Liao, Sinan Wang, Taku Komura
ECCV, 2024
- ContourCraft: Learning to Resolve Intersections in Neural Multi-Garment Simulations
Artur Grigorev, Giorgio Becherini, Michael Black, Otmar Hilliges, Bernhard Thomaszewski
SIGGRAPH, 2024
- GAPS: Geometry-Aware, Physics-Based, Self-Supervised Neural Garment Draping | Code
Ruochen Chen, Liming Chen, Shaifali Parashar
3DV, 2024
- HOOD: Hierarchical Graphs for Generalized Modelling of Clothing Dynamics | Code
Artur Grigorev, Bernhard Thomaszewski, Michael J. Black, Otmar Hilliges
CVPR, 2023
- Towards Multi-Layered 3D Garments Animation
Yidi Shao, Chen Change Loy, Bo Dai
ICCV, 2023
- GenSim: Unsupervised Generic Garment Simulator
Lokender Tiwari, Brojeshwar Bhowmick, Sanjana Sinha
CVPR Workshop, 2023
- Neural Cloth Simulation | Code
Hugo Bertiche, Meysam Madadi, Sergio Escalera
SIGGRAPH Asia, 2022
- SNUG: Self-Supervised Neural Dynamic Garments | Code
Igor Santesteban, Miguel A. Otaduy, Dan Casas
CVPR, 2022 (Oral)
- Learning-Based Animation of Clothing for Virtual Try-On
Igor Santesteban, Miguel A. Otaduy, Dan Casas
EUROGRAPHICS, 2019
- DiffXPBD : Differentiable Position-Based Simulation of Compliant Constraint Dynamics
Tuur Stuyck, Hsiao-yu Chen
ACM- Computer Graphics and Interactive Techniques, 2023
- Φ-SfT: Shape-from-Template with a Physics-Based Deformation Model | Code
Navami Kairanda, Edith Tretschk, Mohamed Elgharib, Christian Theobalt, Vladislav Golyanik
CVPR, 2022
- Differentiable Cloth Simulation for Inverse Problems | Code
Junbang Liang, Ming C. Lin, Vladlen Koltun
NeurIPS, 2019
- GAvatar: Animatable 3D Gaussian Avatars with Implicit Mesh Learning
Ye Yuan, Xueting Li, Yangyi Huang, Shalini De Mello, Koki Nagano, Jan Kautz, Umar Iqbal
CVPR, 2024 (Highlight)
- Dressing Avatars: Deep Photorealistic Appearance for Physically Simulated Clothing
Donglai Xiang, Timur Bagautdinov, Tuur Stuyck, Fabian Prada, Javier Romero, Weipeng Xu, Shunsuke Saito, Jingfan Guo, Breannan Smith, Takaaki Shiratori,
Yaser Sheikh, Jessica Hodgins, and Chenglei Wu
ACM Transactions on Graphics (TOG), 2022
- Design2Cloth: 3D Cloth Generation from 2D Masks
Jiali Zheng, Rolandos Alexandros Potamias, Stefanos Zafeiriou
CVPR, 2024
- Garment3DGen: 3D Garment Stylization and Texture Generation
Nikolaos Sarafianos, Tuur Stuyck, Xiaoyu Xiang, Yilei Li, Jovan Popovic, Rakesh Ranjan
ArXiv, 2024
- GarmentDreamer: 3DGS Guided Garment Synthesis with Diverse Geometry and Texture Details
Boqian Li, Xuan Li, Ying Jiang, Tianyi Xie, Feng Gao, Huamin Wang, Yin Yang, Chenfanfu Jiang
ArXiv, 2024
- WordRobe: Text-Guided Generation of Textured 3D Garments
Astitva Srivastava, Pranav Manu, Amit Raj, Varun Jampani, Avinash Sharma
ECCV, 2024
- Animatable and Relightable Gaussians for High-fidelity Human Avatar Modeling
Zhe Li, Yipengjing Sun, Zerong Zheng, Lizhen Wang, Shengping Zhang, Yebin Liu
CVPR, 2024
NOTE: Map to Canonical
- PhysAvatar: Learning the Physics of Dressed 3D Avatars from Visual Observations
Yang Zheng, Qingqing Zhao, Guandao Yang, Wang Yifan, Donglai Xiang, Florian Dubost, Dmitry Lagun, Thabo Beeler, Federico Tombari, Leonidas Guibas, Gordon Wetzstein
ArXiv, 2024
- ARAH: Animatable Volume Rendering of Articulated Human SDFs
Shaofei Wang, Katja Schwarz, Andreas Geiger, Siyu Tang
ECCV, 2022
- TAVA: Template-free Animatable Volumetric Actors
Ruilong Li, Julian Tanke, Minh Vo, Michael Zollhoefer, Jürgen Gall, Angjoo Kanazawa, Christoph Lassner
ECCV, 2022
- ReLoo: Reconstructing Humans Dressed in Loose Garments from Monocular Video in the Wild
Chen Guo, Tianjian Jiang, Manuel Kaufmann, Chengwei Zheng, Julien Valentin, Jie Song, Otmar Hilliges
ECCV, 2024
- GaussianAvatar: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians
Liangxiao Hu, Hongwen Zhang, Yuxiang Zhang, Boyao Zhou, Boning Liu, Shengping Zhang, Liqiang Nie
CVPR, 2024
NOTE: Map to Canonical T-pose
- GaussianBody: Clothed Human Reconstruction via 3D Gaussian Splatting
Mengtian Li, Shengxiang Yao, Zhifeng Xie, Keyu Chen
ArXiv, 2024
NOTE: Map to Canonical T-pose
- REC-MV: REconstructing 3D Dynamic Cloth from Monocular Videos
Lingteng Qiu, Guanying Chen, Jiapeng Zhou, Mutian Xu, Junle Wang, Xiaoguang Han
CVPR, 2023
NOTE: Deformation of T-pose template mesh
- PERGAMO: Personalized 3D Garments from Monocular Video
Andrés Casado-Elvira, Marc Comino Trinidad, Dan Casas
SIGGRAPH, 2022
- Deep Physics-aware Inference of Cloth Deformation for Monocular Human Performance Capture
Yue Li, Marc Habermann, Bernhard Thomaszewski, Stelian Coros, Thabo Beeler and Christian Theobalt
3DV, 2021
NOTE: Temporal deformation
- Drivable 3D Gaussian Avatars
Wojciech Zielonka, Timur Bagautdinov, Shunsuke Saito, Michael Zollhöfer, Justus Thies, Javier Romero
ArXiv, 2023
NOTE: Map to Canonical T-pose
- Dress-1-to-3: Single Image to Simulation-Ready 3D Outfit with Diffusion Prior and Differentiable Physics
Xuan Li, Chang Yu, Wenxin Du, Ying Jiang, Tianyi Xie, Yunuo Chen, Yin Yang, Chenfanfu Jiang
ArXiv
- DiffAvatar: Simulation-Ready Garment Optimization with Differentiable Simulation
Yifei Li, Hsiao-yu Chen, Egor Larionov, Nikolaos Sarafianos, Wojciech Matusik, Tuur Stuyck
CVPR, 2024
- Inverse Garment and Pattern Modeling with a Differentiable Simulator
Boyang Yu, Frederic Cordier, and Hyewon Seo
ArXiv, 2024
- Towards Garment Sewing Pattern Reconstruction from a Single Image
Lijuan Liu, Xiangyu Xu, Zhijie Lin, Jiabin Liang, Shuicheng Yan
SIGGRAPH ASIA, 2023
- NeuralTailor: Reconstructing Sewing Pattern Structures from 3D Point Clouds of Garments
Maria Korosteleva, Sung-Hee Lee
ACM Transactions on Graphics (TOG), 2022
- Garment4D: Garment Reconstruction from Point Cloud Sequences
Fangzhou Hong, Liang Pan, Zhongang Cai, Ziwei Liu
NeurIPS, 2021
- ANIM: Accurate Neural Implicit Model for Human Reconstruction from a single RGB-D image
Marco Pesavento, Yuanlu Xu, Nikolaos Sarafianos, Robert Maier, Ziyan Wang, Chun-Han Yao, Marco Volino, Edmond Boyer, Adrian Hilton, Tony Tung
CVPR, 2024
- SiTH: Single-view Textured Human Reconstruction with Image-Conditioned Diffusion
Hsuan-I Ho, Jie Song, Otmar Hilliges
CVPR, 2024
- Garment Recovery with Shape and Deformation Priors
Ren Li, Corentin Dumery, Benoît Guillard, Pascal Fua
CVPR, 2024
- LayerNet: High-Resolution Semantic 3D Reconstruction of Clothed People
Enric Corona, Guillem Alenyà, Gerard Pons-Moll, Francesc Moreno-Noguer
TPAMI, 2024
- Layered-Garment Net: Generating Multiple Implicit Garment Layers from a Single Image | Code
Alakh Aggarwal, Jikai Wang, Steven Hogue, Saifeng Ni, Madhukar Budagavi, Xiaohu Guo
ACCV, 2022
- CaPhy: Capturing Physical Properties for Animatable Human Avatars
Zhaoqi Su, Liangxiao Hu, Siyou Lin, Hongwen Zhang, Shengping Zhang, Justus Thies, Yebin Liu
ICCV, 2023
- CloSET: Modeling Clothed Humans on Continuous Surface with Explicit Template Decomposition
Hongwen Zhang, Siyou Lin, Ruizhi Shao, Yuxiang Zhang, Zerong Zheng, Han Huang, Yandong Guo, Yebin Liu
CVPR, 2023
- SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks
Shunsuke Saito, Jinlong Yang, Qianli Ma, Michael J. Black
CVPR, 2021
NOTE: Map to canonical t-pose
- DeepWrinkles: Accurate and Realistic Clothing Modeling
Zorah Lahner, Daniel Cremers, Tony Tung
ECCV, 2018
- ClothCap: Seamless 4D Clothing Capture and Retargeting
Gerard Pons-Moll, Sergi Pujades, Sonny Hu, and Michael J. Black
ACM- TOG, 2017
- DrapeNet: Garment Generation and Self-Supervised Draping | Code
Luca De Luigi, Ren Li, Benoit Guillard, Mathieu Salzmann, Pascal Fua
CVPR, 2023
- ISP: Multi-Layered Garment Draping with Implicit Sewing Patterns | Code
Ren Li, Benoit Guillard, Pascal Fua
NeurIPS, 2023
- ClothCombo: Modeling Inter-Cloth Interaction for Draping Multi-Layered Clothes
Dohae Lee, Hyun Kang, In-Kwon Lee
ACM TOG, 2023
- DIG: Draping Implicit Garment over the Human Body | Code
Ren Li, Benoît Guillard, Edoardo Remelli, Pascal Fua
ACCV, 2022
- ULNeF: Untangled Layered Neural Fields for Mix-and-Match Virtual Try-On
Igor Santesteban, Miguel A. Otaduy, Nils Thuerey, Dan Casas
NeurIPS, 2022
- Self-Supervised Collision Handling via Generative 3D Garment Models for Virtual Try-On | Code
Igor Santesteban, Nils Thuerey, Miguel A. Otaduy, Dan Casas
CVPR, 2021
- M3D-VTON: A Monocular-to-3D Virtual Try-On Network
Fuwei Zhao, Zhenyu Xie, Michael Kampffmeyer, Haoye Dong, Songfang Han, Tianxiang Zheng, Tao Zhang, Xiaodan Liang
ICCV, 2021
- Fully Convolutional Graph Neural Networks for Parametric Virtual Try-On
Raquel Vidaurre, Igor Santesteban, Elena Garces, Dan Casas
SIGGRAPH, 2020
- LayGA: Layered Gaussian Avatars for Animatable Clothing Transfer
Siyou Lin, Zhe Li, Zhaoqi Su, Zerong Zheng, Hongwen Zhang, Yebin Liu
SIGGRAPH, 2024
1. Cloth3D
This dataset contains a large collection of synthetic garment data obtained by animating SMPL models wearing different garments. It covers six garment categories: t-shirt, top, dress, trousers, skirt, and jumpsuit, each with variations in topology such as sleeve, torso, and leg length, and distance from the body. UV maps are also provided, allowing one to swap in any desired texture.
2. Deep Fashion3D
This dataset provides a collection of 3D garments obtained from 3D reconstruction of images. It contains over 2000 3D garment models spanning 10 different cloth categories. Colored 3D point clouds of the garments, the pose of the underlying body, and feature line annotations are provided.
3. MGN
426 3D scans of people with various body shapes and poses, in diverse clothing, with garment segmentations provided.
4. SIZER
Consists of 100 different subjects wearing casual clothing items in various sizes, totaling approximately 2000 scans. The dataset includes the scans, registrations to the SMPL model, scans segmented into clothing parts, and garment category and size labels.
The dataset contains 23,500 samples. Each instance is a garment design sample described by a sewing pattern, two draped 3D models (one clean and one noisy, imitating artifacts of the 3D scanning process), and renders of the clean 3D model draped over the body. Every instance is a variation of one of 19 base garment designs.
GarmentCodeData contains 115,000 data points that cover a variety of designs in many common garment categories: tops, shirts, dresses, jumpsuits, skirts, pants, etc., fitted to a variety of body shapes.
The data is generated using a modified version of ARCSim and sequences from the CMU Motion Capture Database converted to SMPL format in SURREAL.
1. 3D Humans
The 3DHumans dataset provides around 180 meshes of people with diverse body shapes in various garment styles and sizes. It covers a wide variety of clothing, ranging from loose robed clothing such as the saree (a typical South-Asian dress) to relatively tight-fitting clothing such as shirts and trousers. Along with the high-quality geometry (mesh) and texture map, registered SMPL parameters are also provided.
2. THuman
The dataset contains 500 high-quality human scans captured by a dense DSLR rig. For each scan, the 3D model, the corresponding texture map, and SMPL-X fitting parameters with corresponding meshes are provided.
3. XHumans
Contains 233 sequences of high-quality textured scans from 20 participants, totalling about 35,500 data frames.
4. BUFF
BUFF consists of 6 subjects, 3 male and 3 female, wearing 2 clothing styles: a) t-shirt and long pants and b) a soccer outfit. The sequence lengths range from 4 to 9 seconds (200-500 frames), totaling 13,632 3D scans.
Contains captured dynamic motions of 4 dresses, 28 lower, 30 upper, and 32 outer garments. For each garment, its canonical template mesh is also provided to benefit future research on human clothing.
6. MultiHuman
Contains 453 high-quality 3D human scans with raw obj mesh files and texture maps. Each scan contains 1-3 persons.
4DHumanOutfit is a dataset of 4D human motion sequences, sampled densely in space and time, with 20 actors dressed in 7 outfits each, performing 11 motions exhibiting large displacements in each outfit.