Divisor

Hands-on procedural generation.

Divisor is a framework enabling flexible media creation using advanced diffusion models like Flux and MMaDA. Developers, researchers, and artists gain precise control over low-level generative processes using off-the-shelf computers, making experimentation with neural networks faster and easier than ever.

Features:

  • Multimodal Creation – Actively sculpt content such as text and images.
  • Robust Versioning – Pause, resume, save, or restore states with exact reproducibility and reversibility.
  • Private & Safe – Compartmentalized and local-first, so data never leaves your workspace.
  • Fine‑Grained Noise & Variation Controls – Branch variations to generate diverse yet consistent results.
  • Integration with External Resources – Start quickly with batteries included: models, adapters, and MIR specs.
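
As a dependency-free concept sketch (not Divisor's actual API — `base_noise` and `branch` are hypothetical names), seeded branching works like this: the same seed always reproduces the same noise, and child seeds derived from a parent give distinct but repeatable variations:

```python
import random

def base_noise(seed, n=4):
    """Deterministic toy 'latent noise' vector for a given seed."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

def branch(seed, variation):
    """Derive a child seed so each variation is itself reproducible.

    Tuple-of-int hashing is stable across Python runs, so the same
    (seed, variation) pair always yields the same child seed.
    """
    return hash((seed, variation)) & 0xFFFFFFFF

root = base_noise(42)
assert root == base_noise(42)            # exact reproducibility
v1 = base_noise(branch(42, 1))
v2 = base_noise(branch(42, 2))
assert v1 != v2                          # distinct, yet repeatable, branches
```

The same scheme scales to real latent tensors: as long as every variation is addressed by a (parent seed, branch index) pair, any branch can be regenerated exactly.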

Tech Specs:

  • Manual Timestep Control – Step-by-step processing of dynamic prompts, layer‑wise manipulations, and on‑the‑fly parameter changes.
  • Extensible Prompt Engineering – Dedicated multimodal prompting, system messages, and automatic parsing for LLM‑driven results.
  • Model‑Agnostic Architecture – Unified API abstraction allows interchangeable custom LoRA and autoencoders.
  • User‑Facing Interfaces – CLI and Gradio interfaces ready to use or attach to other apps.
  • Sensible Python Engineering – Modern tooling with minimal dependencies.
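
Manual timestep control can be sketched with a toy denoising loop. This is a generic, dependency-free illustration of stepping a diffusion process by hand and changing a parameter mid-run; it does not reflect Divisor's real API:

```python
# Toy manual timestep loop: each step nudges a "latent" toward a
# target, and a parameter (here, guidance) is changed between steps --
# the idea behind on-the-fly adjustments during generation.
def denoise_step(latent, target, guidance):
    return [l + guidance * (t - l) for l, t in zip(latent, target)]

latent = [0.0, 0.0, 0.0]
target = [1.0, 1.0, 1.0]
for step in range(10):
    guidance = 0.2 if step < 5 else 0.5   # on-the-fly parameter change
    latent = denoise_step(latent, target, guidance)
# latent has converged close to target
```

Because each step is an explicit function call rather than a sealed pipeline, you can also swap prompts, apply layer-wise edits, or snapshot state between any two steps.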

Requires:

  • Windows, macOS, or Linux device
  • Nvidia graphics card or Apple M-series chip with 8GB+ VRAM (AMD support untested)
  • uv
  • Git (Windows 10/11)

Install:

git clone https://github.com/darkshapes/divisor
cd divisor
uv sync --dev

Linux/macOS:

source .venv/bin/activate

Windows:

Set-ExecutionPolicy Bypass -Scope Process -Force; .venv\Scripts\Activate.ps1

Run:

dvzr
usage: divisor --model-type dev --quantization <args>

divisor - low-level diffusion prototyping

options:
  -h, --help            show this help message and exit
  --quantization        Enable quantization (fp8, e5m2, e4m3fn) for the model
  -m, --model-type {dev,schnell,dev2,mini,llm}
                        Model type to use: 'dev' (flux1-dev), 'schnell' (flux1-schnell), 'dev2' (flux2-dev), 'mini' (flux1-mini), 'llm' (MMaDA) Default:
                        dev

Valid arguments: --ae_id, --width, --height, --guidance, --seed, --prompt, --tiny, --device, --num_steps, --loop, --offload, --compile, --verbose
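
A minimal argparse sketch approximating the options above (flag names and model-type choices are taken from the help text; `build_parser` is a hypothetical name, and whether --quantization takes a value or acts as a bare flag is an assumption here):

```python
import argparse

def build_parser():
    p = argparse.ArgumentParser(
        prog="divisor",
        description="low-level diffusion prototyping")
    # Assumed to take a value, based on the formats listed in the help text.
    p.add_argument("--quantization", choices=["fp8", "e5m2", "e4m3fn"],
                   help="Enable quantization for the model")
    p.add_argument("-m", "--model-type", default="dev",
                   choices=["dev", "schnell", "dev2", "mini", "llm"],
                   help="Model type to use")
    # Two of the additional valid arguments, for illustration.
    p.add_argument("--seed", type=int)
    p.add_argument("--prompt")
    return p

args = build_parser().parse_args(["-m", "schnell", "--seed", "7"])
assert args.model_type == "schnell" and args.seed == 7
```

Passing no arguments falls back to the documented default, `--model-type dev`.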

dvzr pytest

About

fragment / denoise / process
