# DiffSynth Studio

[DiffSynth-Studio](https://github.com/modelscope/DiffSynth-Studio) is an open-source diffusion model engine launched by [ModelScope](https://modelscope.cn/), focused on image and video style transfer and generation. By optimizing architectural components such as the text encoder, UNet, and VAE, it significantly improves computational performance while remaining compatible with open-source community models, giving users an efficient and flexible creative tool.

DiffSynth Studio supports various diffusion models, including Wan-Video, StepVideo, HunyuanVideo, CogVideoX, FLUX, ExVideo, Kolors, Stable Diffusion 3, and more.



You can use DiffSynth Studio to quickly train diffusion models and use SwanLab to track and visualize your experiments.

[[toc]]

## Preparation

**1. Clone the Repository and Set Up the Environment**

```bash
git clone https://github.com/modelscope/DiffSynth-Studio.git
cd DiffSynth-Studio
pip install -e .
pip install swanlab
```
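To confirm the environment before moving on, you can run a quick import check (a minimal sketch; `importlib.metadata` reports the installed version of any pip package):

```python
# Sanity check: both packages should import cleanly after installation.
from importlib.metadata import version

import diffsynth  # raises ImportError if the editable install failed
import swanlab    # raises ImportError if swanlab is missing

print("swanlab version:", version("swanlab"))
```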
**2. Prepare the Dataset**

The dataset for DiffSynth Studio needs to be structured in the following format. For example, place the image data in the `data/dog` directory:

```bash
data/dog/
└── train
    ├── 00.jpg
    ├── 01.jpg
    ├── 02.jpg
    ├── 03.jpg
    ├── 04.jpg
    └── metadata.csv
```

The `metadata.csv` file should be structured as follows:

```csv
file_name,text
00.jpg,A small dog
01.jpg,A small dog
02.jpg,A small dog
03.jpg,A small dog
04.jpg,A small dog
```
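For larger datasets, you can generate `metadata.csv` programmatically instead of writing it by hand. A minimal sketch, assuming every image sits in `data/dog/train` and shares the placeholder caption from the example above:

```python
# Write one metadata row per .jpg in data/dog/train.
# "A small dog" is a placeholder caption; use real per-image captions in practice.
import csv
from pathlib import Path

train_dir = Path("data/dog/train")

with open(train_dir / "metadata.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["file_name", "text"])
    for image in sorted(train_dir.glob("*.jpg")):
        writer.writerow([image.name, "A small dog"])
```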
**3. Prepare the Model**

Here, we use the Kolors model as an example. Download the model weights and VAE weights:

```bash
modelscope download --model=Kwai-Kolors/Kolors --local_dir models/kolors/Kolors
modelscope download --model=AI-ModelScope/sdxl-vae-fp16-fix --local_dir models/kolors/sdxl-vae-fp16-fix
```
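Before training, it is worth checking that the downloads produced the files the training script expects; the paths below are the same ones passed to `train_kolors_lora.py` in the next section:

```python
# Verify that the Kolors and VAE weights are where the training command expects them.
from pathlib import Path

expected = [
    "models/kolors/Kolors/unet/diffusion_pytorch_model.safetensors",
    "models/kolors/Kolors/text_encoder",
    "models/kolors/sdxl-vae-fp16-fix/diffusion_pytorch_model.safetensors",
]
for path in expected:
    print("ok     " if Path(path).exists() else "MISSING", path)
```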
## Setting SwanLab Parameters

When running the training script, add `--use_swanlab` to record the training process on the SwanLab platform.

If you need offline logging, you can set `--swanlab_mode "local"` instead.

```bash {3,4}
CUDA_VISIBLE_DEVICES="0" python examples/train/kolors/train_kolors_lora.py \
...
--use_swanlab \
--swanlab_mode "cloud"
```
## Starting the Training

Use the following command to start training; SwanLab will record the hyperparameters, training logs, loss curve, and other information:

```bash {11,12}
CUDA_VISIBLE_DEVICES="0" python examples/train/kolors/train_kolors_lora.py \
--pretrained_unet_path models/kolors/Kolors/unet/diffusion_pytorch_model.safetensors \
--pretrained_text_encoder_path models/kolors/Kolors/text_encoder \
--pretrained_fp16_vae_path models/kolors/sdxl-vae-fp16-fix/diffusion_pytorch_model.safetensors \
--dataset_path data/dog \
--output_path ./models \
--max_epochs 10 \
--center_crop \
--use_gradient_checkpointing \
--precision "16-mixed" \
--use_swanlab \
--swanlab_mode "cloud"
```



## Additional Notes

If you want to customize the SwanLab project name, experiment name, or other parameters, modify the training code as follows:

**1. Text-to-Image Tasks**

In `DiffSynth-Studio/diffsynth/trainers/text_to_image.py`, locate the `swanlab_logger` variable and modify the `project` and `name` parameters:
```python {6-7}
if args.use_swanlab:
    from swanlab.integration.pytorch_lightning import SwanLabLogger
    swanlab_config = {"UPPERFRAMEWORK": "DiffSynth-Studio"}
    swanlab_config.update(vars(args))
    swanlab_logger = SwanLabLogger(
        project="diffsynth_studio",
        name="diffsynth_studio",
        config=swanlab_config,
        mode=args.swanlab_mode,
        logdir=args.output_path,
    )
    logger = [swanlab_logger]
```
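For context, `SwanLabLogger` is a standard PyTorch Lightning logger, so the training script simply hands it to the `Trainer`. A minimal sketch of that wiring, with illustrative `project`/`name` values:

```python
# Minimal sketch: SwanLabLogger plugs into a Lightning Trainer like any other logger.
import pytorch_lightning as pl
from swanlab.integration.pytorch_lightning import SwanLabLogger

swanlab_logger = SwanLabLogger(project="my_project", name="kolors_lora_run")
trainer = pl.Trainer(max_epochs=10, logger=[swanlab_logger])
# trainer.fit(model, train_dataloader)  # model and dataloader come from the training script
```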
**2. Wan-Video Text-to-Video Tasks**

In `DiffSynth-Studio/examples/wanvideo/train_wan_t2v.py`, locate the `swanlab_logger` variable and modify the `project` and `name` parameters:

```python {6-7}
if args.use_swanlab:
    from swanlab.integration.pytorch_lightning import SwanLabLogger
    swanlab_config = {"UPPERFRAMEWORK": "DiffSynth-Studio"}
    swanlab_config.update(vars(args))
    swanlab_logger = SwanLabLogger(
        project="wan",
        name="wan",
        config=swanlab_config,
        mode=args.swanlab_mode,
        logdir=args.output_path,
    )
    logger = [swanlab_logger]
```