Pruna is a model optimization framework built for developers, enabling you to deliver faster, more efficient models with minimal overhead. It provides a comprehensive suite of compression algorithms, including caching, quantization, pruning, distillation, and compilation, to make your models:
- Faster: Accelerate inference times through advanced optimization techniques
- Smaller: Reduce model size while maintaining quality
- Cheaper: Lower computational costs and resource requirements
- Greener: Decrease energy consumption and environmental impact
The toolkit is designed with simplicity in mind: optimizing a model requires just a few lines of code. It supports various model types, including LLMs, Diffusion and Flow Matching Models, Vision Transformers, Speech Recognition Models, and more.
Pruna is currently available for installation on Linux, macOS, and Windows. However, some algorithms are restricted to particular operating systems and might not be available on all platforms.
Before installing, ensure you have:
- Python 3.9 or higher
- Optional: CUDA toolkit for GPU support
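A quick sanity check of your environment before installing might look like this (the CUDA check assumes PyTorch is already installed):

```python
import sys

# Pruna requires Python 3.9 or higher.
assert sys.version_info >= (3, 9), "Pruna requires Python 3.9+"

# Optional: verify that CUDA is visible for GPU support
# (this check assumes PyTorch is already installed).
import torch
print("CUDA available:", torch.cuda.is_available())
```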
Pruna is available on PyPI, so you can install it using pip:

```bash
pip install pruna
```
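You can verify the installation by importing the package and printing its version (assuming the installed release exposes a `__version__` attribute):

```python
import pruna

# Sanity check: the import succeeds and reports the installed version
# (assumes the release exposes a __version__ attribute).
print(pruna.__version__)
```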
You can also install Pruna directly from source by cloning the repository and installing the package in editable mode:

```bash
git clone https://github.com/PrunaAI/pruna.git
cd pruna
pip install -e .
```
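With an editable install, changes in your local clone take effect immediately; you can confirm that Python resolves `pruna` to the checkout rather than to site-packages:

```python
import pruna

# With an editable install, this path should point into your cloned
# pruna repository rather than into site-packages.
print(pruna.__file__)
```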
Getting started with Pruna is easy-peasy pruna-squeezy!
First, load any pre-trained model. Here's an example using Stable Diffusion:

```python
from diffusers import StableDiffusionPipeline

base_model = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")
```
Then, use Pruna's `smash` function to optimize your model. Pruna provides a variety of optimization algorithms, and you can combine different algorithms to get the best possible results. You can customize the optimization process using `SmashConfig`:
```python
from pruna import smash, SmashConfig

# Create and smash your model
smash_config = SmashConfig()
smash_config["cacher"] = "deepcache"
smash_config["compiler"] = "stable_fast"
smashed_model = smash(model=base_model, smash_config=smash_config)
```
Your model is now optimized and you can use it as you would use the original model:
smashed_model("An image of a cute prune.").images[0]
You can then use our evaluation interface to measure the performance of your model:

```python
from pruna.evaluation.task import Task
from pruna.evaluation.evaluation_agent import EvaluationAgent
from pruna.data.pruna_datamodule import PrunaDataModule

datamodule = PrunaDataModule.from_string("LAION256")
datamodule.limit_datasets(10)
task = Task("image_generation_quality", datamodule=datamodule)
eval_agent = EvaluationAgent(task)
eval_agent.evaluate(smashed_model)
```
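If you want to inspect the metrics rather than discard them, you can capture the return value of `evaluate` (the exact structure of the returned results may vary between Pruna versions):

```python
# Capture the metric results instead of discarding them; the exact
# structure of the returned object may vary between Pruna versions.
results = eval_agent.evaluate(smashed_model)
print(results)
```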
That was the minimal example; looking for the maximal one? Check out our documentation for an overview of all supported algorithms, as well as our tutorials for more use cases and examples.
Since Pruna offers a broad range of optimization algorithms, the following table provides a high-level overview of the available methods. For a detailed description of each algorithm, have a look at our documentation.
| Technique | Description | Speed | Memory | Quality |
|---|---|---|---|---|
| `batcher` | Groups multiple inputs together to be processed simultaneously, improving computational efficiency and reducing processing time. | ✅ | ❌ | ➖ |
| `cacher` | Stores intermediate results of computations to speed up subsequent operations. | ✅ | ➖ | ➖ |
| `compiler` | Optimizes the model with instructions for specific hardware. | ✅ | ➖ | ➖ |
| `quantizer` | Reduces the precision of weights and activations, lowering memory requirements. | ✅ | ✅ | ❌ |
| `pruner` | Removes less important or redundant connections and neurons, resulting in a sparser, more efficient network. | ✅ | ✅ | ❌ |
| `factorizer` | Batches several small matrix multiplications into one large fused operation. | ✅ | ➖ | ➖ |
| `kernel` | Specialized GPU routines that speed up parts of the computation. | ✅ | ➖ | ➖ |
✅ (improves), ➖ (approx. the same), ❌ (worsens)
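These categories can be freely combined in a single `SmashConfig`. As an illustration, here is a minimal sketch pairing a quantizer with a compiler; the algorithm names `hqq` and `torch_compile` are examples, and actual availability depends on your model and platform:

```python
from pruna import smash, SmashConfig

# A sketch combining two technique categories from the table above.
# The algorithm names ("hqq" for the quantizer, "torch_compile" for
# the compiler) are illustrative; check the documentation for the
# options supported by your model and platform.
smash_config = SmashConfig()
smash_config["quantizer"] = "hqq"
smash_config["compiler"] = "torch_compile"
smashed_model = smash(model=base_model, smash_config=smash_config)
```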
If you cannot find an answer to your question or problem in our documentation, our FAQs, or an existing issue, we are happy to help you! You can either get help from the Pruna community on Discord, join our Office Hours, or open an issue on GitHub.
The Pruna package was made with 💜 by the Pruna AI team and our amazing contributors. Contribute to the repository to become part of the Pruna family!
If you use Pruna in your research, feel free to cite the project! 💜
```bibtex
@misc{pruna,
  title = {Efficient Machine Learning with Pruna},
  year = {2023},
  note = {Software available from pruna.ai},
  url = {https://www.pruna.ai/}
}
```