A ComfyUI custom node wrapper for LightX2V, enabling modular video generation with advanced optimization features.
- Modular Configuration System: Separate nodes for each aspect of video generation
- Text-to-Video (T2V) and Image-to-Video (I2V): Support for both generation modes
- Advanced Optimizations:
  - TeaCache acceleration (up to 3x speedup)
  - Quantization support (int8, fp8)
  - Memory optimization with CPU offloading
  - Lightweight VAE options
- LoRA Support: Chain multiple LoRA models for customization
- Multiple Model Support: wan2.1, hunyuan architectures
- Clone this repository with submodules into your ComfyUI's `custom_nodes` directory:

```bash
cd ComfyUI/custom_nodes
git clone --recursive https://github.com/gaclove/ComfyUI-Lightx2vWrapper.git
```
If you already cloned without submodules, initialize them:

```bash
cd ComfyUI-Lightx2vWrapper
git submodule update --init --recursive
```
- Install dependencies:

```bash
cd ComfyUI-Lightx2vWrapper

# Install lightx2v submodule dependencies
pip install -r lightx2v/requirements.txt

# Install ComfyUI wrapper dependencies
pip install -r requirements.txt
```
- Download models and place them in the `ComfyUI/models/lightx2v/` directory.
**LightX2V Inference Config**: Basic inference configuration for video generation.
- Inputs: model, task_type, inference_steps, seed, cfg_scale, width, height, video_length, fps
- Output: Base configuration object
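
For orientation, here is an illustrative value set. These are not the node's defaults; the numbers reflect common Wan2.1 480P settings (832x480, 81 frames at 16 fps) and should be adjusted to your model:

```python
# Illustrative parameters only -- check your model's card for supported
# resolutions and lengths. Wan2.1 480P models commonly target 832x480,
# 81 frames, 16 fps.
config = {
    "task_type": "i2v",
    "inference_steps": 30,
    "seed": 42,
    "cfg_scale": 5.0,
    "width": 832,
    "height": 480,
    "video_length": 81,  # number of frames
    "fps": 16,
}
```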
**LightX2V TeaCache**: Feature caching acceleration configuration.
- Inputs: enable, threshold (0.0-1.0), use_ret_steps
- Output: TeaCache configuration
- Note: Higher threshold = more speedup at some quality cost (0.1 ≈ 2x, 0.2 ≈ 3x)
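
To make the threshold concrete, here is a toy sketch of the general TeaCache idea (accumulate how much the model's inputs drift between denoising steps and reuse the cached output while the drift stays under the threshold). This is not this wrapper's implementation:

```python
# Toy sketch of TeaCache-style step skipping (illustrative, not this
# wrapper's code). A higher threshold tolerates more drift, so more steps
# are skipped and generation is faster.
class StepSkipper:
    def __init__(self, threshold: float):
        self.threshold = threshold
        self.accumulated = 0.0

    def should_skip(self, rel_change: float) -> bool:
        self.accumulated += rel_change
        if self.accumulated < self.threshold:
            return True          # drift is small: cached result still valid
        self.accumulated = 0.0   # drift too large: recompute and reset
        return False

skipper = StepSkipper(threshold=0.2)
print([skipper.should_skip(c) for c in (0.05, 0.06, 0.12, 0.04)])
# -> [True, True, False, True]
```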
**Quantization**: Model quantization settings for memory efficiency.
- Inputs: dit_precision, t5_precision, clip_precision, backend, sensitive_layers_precision
- Output: Quantization configuration
- Backends: Auto-detected (vllm, sgl, q8f)
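
To show where the memory saving comes from, here is a toy per-tensor int8 example (requires PyTorch). It is illustrative only; the real backends (vllm, sgl, q8f) use fused low-precision kernels rather than code like this:

```python
# Toy symmetric per-tensor int8 quantization: 4 bytes/weight -> 1 byte/weight
# at rest, with a small round-off error on dequantization.
import torch

def quantize_int8(w: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    scale = w.abs().max() / 127.0                      # per-tensor scale
    q = (w / scale).round().clamp(-128, 127).to(torch.int8)
    return q, scale

def dequantize_int8(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.to(torch.float32) * scale

w = torch.randn(4, 4)
q, s = quantize_int8(w)
print((w - dequantize_int8(q, s)).abs().max())         # small error
```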
**LightX2V Memory Optimization**: Memory management strategies.
- Inputs: optimization_level, attention_type, enable_rotary_chunking, cpu_offload, unload_after_generate
- Output: Memory optimization configuration
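
As background on what cpu_offload-style options trade (VRAM for transfer time), here is a generic PyTorch offloading pattern. It is a sketch of the technique, not this wrapper's internals, and assumes a CUDA device is available:

```python
# Generic CPU-offload pattern: keep weights in system RAM and move each
# block to the GPU only for the duration of its forward pass.
import torch

def run_offloaded(blocks: list[torch.nn.Module], x: torch.Tensor) -> torch.Tensor:
    x = x.to("cuda")
    for block in blocks:
        block.to("cuda")   # upload weights just-in-time
        x = block(x)
        block.to("cpu")    # release VRAM before the next block loads
    return x
```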
**VAE Options**: VAE optimization options.
- Inputs: use_tiny_vae, use_tiling_vae
- Output: VAE configuration
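
The idea behind use_tiling_vae, sketched below for a 4D latent with no overlap blending (real implementations blend tile seams; `vae.decode` is a stand-in for the actual decoder): decoding tile-by-tile bounds peak VRAM by tile size instead of frame size.

```python
# Simplified tiled-VAE decode (illustrative only).
import torch

def decode_tiled(vae, latent: torch.Tensor, tile: int = 32) -> torch.Tensor:
    _, _, h, w = latent.shape
    rows = []
    for y in range(0, h, tile):
        row = [vae.decode(latent[:, :, y:y + tile, x:x + tile])
               for x in range(0, w, tile)]
        rows.append(torch.cat(row, dim=-1))   # stitch tiles along width
    return torch.cat(rows, dim=-2)            # stitch rows along height
```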
**LightX2V LoRA Loader**: Load and chain LoRA models (see the sketch after this list).
- Inputs: lora_name, strength (0.0-2.0), lora_chain (optional)
- Output: LoRA chain configuration
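
A minimal sketch of the chaining idea: each loader appends one (name, strength) entry to an optional upstream chain, producing an ordered list applied at inference time. The function and dict keys are hypothetical, not this wrapper's internal API:

```python
# Illustrative LoRA chaining (not the node's actual code).
def add_lora(lora_name: str, strength: float, lora_chain=None):
    chain = list(lora_chain) if lora_chain else []
    chain.append({"name": lora_name, "strength": strength})
    return chain

chain = add_lora("style_lora.safetensors", 0.8)
chain = add_lora("distill_lora.safetensors", 1.0, lora_chain=chain)
print(chain)  # applied in order: style first, then distill
```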
**LightX2V Config Combiner**: Combines all configuration modules into a single configuration (see the sketch after this list).
- Inputs: All configuration types (optional)
- Output: Combined configuration object
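
Conceptually, combining is a merge of whichever optional module configs are connected, leaving unconnected inputs at their defaults. The sketch below is illustrative; the key names are hypothetical:

```python
# Conceptual config merge (not the node's actual code).
def combine_configs(*modules):
    combined = {}
    for module in modules:
        if module:               # skip unconnected (None) inputs
            combined.update(module)
    return combined

base = {"task_type": "t2v", "inference_steps": 30}
teacache = {"teacache_enable": True, "teacache_threshold": 0.2}
print(combine_configs(base, teacache, None))
```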
**LightX2V Modular Inference**: Main inference node for video generation.
- Inputs: combined_config, prompt, negative_prompt, image (optional), audio (optional)
- Outputs: Generated video frames
Basic text-to-video workflow:
- Create LightX2V Inference Config (task_type: "t2v")
- Use LightX2V Config Combiner
- Connect to LightX2V Modular Inference with text prompt
- Save video output
Image-to-video workflow with acceleration:
- Load input image
- Create LightX2V Inference Config (task_type: "i2v")
- Add LightX2V TeaCache (threshold: 0.26)
- Add LightX2V Memory Optimization
- Combine configs with LightX2V Config Combiner
- Run LightX2V Modular Inference
Workflow with LoRA:
- Create base configuration
- Load LoRA with LightX2V LoRA Loader
- Chain multiple LoRAs if needed
- Combine all configs
- Run inference
Download models from: https://huggingface.co/lightx2v
Models should be placed in:

```
ComfyUI/models/lightx2v/
├── Wan2.1-I2V-14B-720P-xxx/   # Main model checkpoints
├── Wan2.1-I2V-14B-480P-xxx/   # Main model checkpoints
└── loras/                     # LoRA models
```
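
If you prefer scripting the download, here is a sketch using huggingface_hub's `snapshot_download`. The repo_id is a placeholder; pick an actual repository listed under https://huggingface.co/lightx2v:

```python
# Requires: pip install huggingface_hub
# <model-repo-name> is a placeholder -- substitute a real repository name.
from huggingface_hub import snapshot_download

repo_id = "lightx2v/<model-repo-name>"
snapshot_download(
    repo_id=repo_id,
    local_dir=f"ComfyUI/models/lightx2v/{repo_id.split('/')[-1]}",
)
```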
- Start with default settings and adjust based on your hardware
- Use TeaCache with threshold 0.1-0.2 for significant speedup
- Enable memory optimization if running on limited VRAM
- Quantization can reduce memory usage but may affect quality
- Chain multiple LoRAs for complex style combinations
- Out of Memory: Enable memory optimization or use quantization
- Slow Generation: Enable TeaCache or reduce inference steps
- Model Not Found: Check model paths in `ComfyUI/models/lightx2v/`
Example workflow JSON files are provided in the `examples/` directory:

- `wan_i2v.json`: Basic image-to-video
- `wan_i2v_with_distill_lora.json`: I2V with distillation LoRA
- `wan_t2v_with_distill_lora.json`: T2V with distillation LoRA
We welcome community contributions! Before submitting code, please follow these steps.

Install the linting tools:

```bash
pip install ruff pre-commit
```
Before committing code, run:

```bash
pre-commit run --all-files
```
This automatically checks for formatting problems, syntax errors, and other code quality issues.
- Fork this repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Create a Pull Request