LightX2V is a lightweight video generation inference framework engineered for efficient, high-performance video synthesis. This unified platform integrates multiple state-of-the-art video generation techniques and supports diverse generation tasks, including text-to-video (T2V) and image-to-video (I2V). X2V denotes the transformation of an input modality (X, such as text or images) into video output (V).
For comprehensive usage instructions, please refer to our documentation: English Docs | δΈζζζ‘£
- β HunyuanVideo
- β Wan2.1
- β SkyReels-V2-DF
- β CogVideoX1.5-5B-T2V
- β Wan2.1-T2V-1.3B-Lightx2v
- β Wan2.1-T2V-14B-Lightx2v
- β Wan2.1-I2V-14B-480P-Lightx2v
- β Wan2.1-I2V-14B-720P-Lightx2v
- β Wan2.1-T2V-14B-StepDistill-CfgDistill-Lightx2v
- β Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-Lightx2v
- β Wan2.1-I2V-14B-720P-StepDistill-CfgDistill-Lightx2v
π Follow our HuggingFace page for the latest model releases from our team.
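As an illustration, any of the checkpoints listed above can be fetched with `huggingface_hub` before running inference. Note that the `lightx2v/...` repo id below is an assumption based on the model list; verify the exact id on our HuggingFace page.

```python
# Sketch: download a distilled checkpoint from the HuggingFace Hub.
# The repo id is assumed from the model list above; confirm it on the
# LightX2V HuggingFace page before use.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill-Lightx2v",
    local_dir="./models/Wan2.1-T2V-14B-StepDistill-CfgDistill-Lightx2v",
)
print(f"Checkpoint downloaded to {local_path}")
```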
We provide multiple frontend interface deployment options:
- π¨ Gradio Interface: Clean and user-friendly web interface, perfect for quick experience and prototyping
- π― ComfyUI Interface: Powerful node-based workflow interface, supporting complex video generation tasks
- π Windows One-Click Deployment: Convenient deployment solution designed for Windows users, featuring automatic environment configuration and intelligent parameter optimization
π‘ Recommended Solutions:
- First-time Users: We recommend the Windows one-click deployment solution
- Advanced Users: We recommend the ComfyUI interface for more customization options
- Quick Experience: The Gradio interface provides the most intuitive hands-on experience
- π₯ SOTA Inference Speed: Achieve ~20x acceleration via step distillation and system optimization (single GPU)
- β‘οΈ Revolutionary 4-Step Distillation: Compresses the original 40-50 inference steps to just 4, with no CFG required (see the sketch after this list)
- π οΈ Advanced Operator Support: Integrated with cutting-edge operators including Sage Attention, Flash Attention, Radial Attention, q8-kernel, sgl-kernel, and vLLM kernels
- π‘ Breaking Hardware Barriers: Run 14B models for 480P/720P video generation with only 8GB VRAM + 16GB RAM
- π§ Intelligent Parameter Offloading: Advanced disk-CPU-GPU three-tier offloading architecture with phase/block-level granular management
- βοΈ Comprehensive Quantization: Support for `w8a8-int8`, `w8a8-fp8`, `w4a4-nvfp4`, and other quantization strategies
- π Smart Feature Caching: Intelligent caching mechanisms to eliminate redundant computations
- π Parallel Inference: Multi-GPU parallel processing for enhanced performance
- π± Flexible Deployment Options: Support for Gradio, service deployment, ComfyUI and other deployment methods
- ποΈ Dynamic Resolution Inference: Adaptive resolution adjustment for optimal generation quality
- ποΈ Video Frame Interpolation: RIFE-based frame interpolation for smooth frame rate enhancement
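To make the numbers above concrete: a 40-step run with CFG costs roughly 80 model forward passes (two per step), while a 4-step CFG-free run costs 4, which is where the ~20x figure comes from. The sketch below shows how these features might be combined in one inference configuration; `InferenceConfig` and every field name in it are hypothetical placeholders for illustration, not LightX2V's actual API (see the documentation links for the real interfaces).

```python
# Hypothetical configuration sketch. All names below are illustrative
# placeholders for the features listed above, not LightX2V's real API.
from dataclasses import dataclass

@dataclass
class InferenceConfig:
    model_path: str = "./models/Wan2.1-T2V-14B-StepDistill-CfgDistill-Lightx2v"
    infer_steps: int = 4             # 4-step distillation instead of 40-50 steps
    enable_cfg: bool = False         # CFG distillation removes the second forward pass
    attention_backend: str = "sage_attn"  # e.g. Sage / Flash / Radial Attention
    quant_scheme: str = "w8a8-fp8"   # one of the supported quantization strategies
    cpu_offload: bool = True         # disk-CPU-GPU three-tier parameter offloading
    feature_caching: bool = True     # reuse features to skip redundant computation

print(InferenceConfig())
```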
For detailed performance metrics and comparisons, please refer to our benchmark documentation.
Detailed Service Deployment Guide β
- Model Quantization - Comprehensive guide to quantization strategies
- Feature Caching - Intelligent caching mechanisms
- Attention Mechanisms - State-of-the-art attention operators
- Parameter Offloading - Three-tier storage architecture
- Parallel Inference - Multi-GPU acceleration strategies
- Variable Resolution Inference - U-shaped resolution strategy
- Step Distillation - 4-step inference technology
- Video Frame Interpolation - Based on RIFE technology
- Low-Resource Deployment - Optimized 8GB VRAM solutions
- Low-Latency Deployment - Ultra-fast inference optimization
- Gradio Deployment - Web interface setup
- Service Deployment - Production API service deployment
- LoRA Model Deployment - Flexible LoRA deployment
We maintain code quality through automated pre-commit hooks to ensure consistent formatting across the project.
Setup Instructions:
- Install required dependencies:

  ```bash
  pip install ruff pre-commit
  ```

- Run before committing:

  ```bash
  pre-commit run --all-files
  ```
We appreciate your contributions to making LightX2V better!
We extend our gratitude to all the model repositories and research communities that inspired and contributed to the development of LightX2V. This framework builds upon the collective efforts of the open-source community.
If you find LightX2V useful in your research, please consider citing our work:
```bibtex
@misc{lightx2v,
    author = {LightX2V Contributors},
    title = {LightX2V: Light Video Generation Inference Framework},
    year = {2025},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://github.com/ModelTC/lightx2v}},
}
```
For questions, suggestions, or support, please feel free to reach out through:
- π GitHub Issues - Bug reports and feature requests
- π¬ GitHub Discussions - Community discussions and Q&A