| language | library_name | license_name |
|---|---|---|
| Python | nnll | MPL-2.0 + Commons Clause 1.0 |
nnll (or null) is a toolkit for researchers and developers working with AI models such as Diffusion models and Large Language Models (LLMs). It provides modular, reusable, and efficient components as a foundation to simplify building and managing these complex systems.
- Generative AI pipeline preparation & execution
- Extracting and classifying metadata from images/models
- Consumer-grade GPU/CPU inference optimization
- Misc UX/UI Experimentation
- 🧨Diffusers, 🤗Transformers, 🦙Ollama, 🍏MLX, 🌀DSPy, 🚅LiteLLM
usage: mir-add --domain info --arch lora --series slam --compatibility sd1_series \
  -k "{'repo':'alimama-creative/slam-sd1.5', 'pkg':{0: {'diffusers': 'load_lora_weights'}}}"
Manually add entries to MIR database.
Offline function.
options:
-h, --help show this help message and exit
-d, --domain DOMAIN Broad name of the type of data (model/ops/info/dev)
-a, --arch ARCH Common name of the neural network structure being referenced
-s, --series SERIES Specific release title or technique
-c, --compatibility COMPATIBILITY
Details about purpose, tasks
-k, --kwargs KWARGS Keyword arguments to pass to function constructors (default: None)
MIR Class attributes:
Domain: ['dev', 'info', 'model', 'ops']
Ops: ['pkg', 'repo']
Info: ['repo', 'pkg', 'file_256', 'layer_256', 'file_b3', 'layer_b3', 'identifier']
Dev: ['stage', 'dtype', 'dep_pkg', 'gen_kwargs', 'lora_kwargs', 'module_alt', 'module_path', 'repo_pkg', 'requires', 'scheduler_alt', 'scheduler_kwargs_alt', 'scheduler_kwargs', 'scheduler', 'tasks', 'weight_map']
A link to example output of the mir-add command
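The flags above map onto a nested database record. Below is a minimal sketch of how one entry might be shaped, assuming the domain → arch → series → compatibility hierarchy implied by the CLI flags; the actual on-disk schema may differ.

```python
# Hypothetical MIR entry built from the mir-add example above.
# The nesting (domain -> arch -> series -> compatibility) is an
# assumption inferred from the CLI flags, not the actual schema.
entry = {
    "info": {                                  # --domain
        "lora": {                              # --arch
            "slam": {                          # --series
                "sd1_series": {                # --compatibility
                    # --kwargs payload, keyed as in the usage example
                    "repo": "alimama-creative/slam-sd1.5",
                    "pkg": {0: {"diffusers": "load_lora_weights"}},
                }
            }
        }
    }
}
```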
usage: mir-maid
Build a custom MIR model database from the currently installed system environment.
Offline function.
options:
-h, --help show this help message and exit
Output:
2025-08-03 14:22:47 INFO ('Available torch devices: mps',)
2025-08-03 14:22:47 INFO ('Wrote #### lines to MIR database file.',)
A link to example output of the mir-maid command
usage: mir-tasks
Scrape the task classes from currently installed libraries and attach them to an existing MIR database.
Offline function.
options:
-h, --help show this help message and exit
Should be used after `mir-maid`.
Output:
INFO ('Wrote #### lines to MIR database file.',)
A link to example output of the mir-tasks command
usage: nnll-autocard black-forest-labs/FLUX.1-Krea-dev -u exdysa -f FLUX.1-Krea-dev-MLX -l mlx -q 8
Create a new HuggingFace RepoCard.
Retrieve HuggingFace repository data, fill out missing metadata, and create a model card.
Optionally download and quantize the repo to a desired folder, ready for upload.
Online function.
positional arguments:
repo Relative path to HF repository
options:
-h, --help show this help message and exit
-l, --library {gguf,schnell,dev,mlx}
Output model type [gguf,mlx,dev,schnell] (optional, default: 'mlx') NOTE: dev/schnell use MFLUX.
-q, --quantization {8,6,4,3,2}
Set quantization level (optional, default: None)
-d, --dry_run Perform a dry run, reading and generating a repo card without converting the model (optional, default: False)
-u, --user USER User for generated repo card (optional)
-f, --folder FOLDER Folder path for downloading (optional, default: /Users/unauthorized/Downloads)
-p, --prompt PROMPT A prompt for the code example (optional, default: 'Test Prompt')
**Valid pipeline tags**:
text-classification, token-classification, table-question-answering, question-answering, zero-shot-classification, translation, summarization, feature-extraction, text-generation, text2text-generation, fill-mask, sentence-similarity, text-to-speech, text-to-audio, automatic-speech-recognition, audio-to-audio, audio-classification, audio-text-to-text, voice-activity-detection, depth-estimation, image-classification, object-detection, image-segmentation, text-to-image, image-to-text, image-to-image, image-to-video, unconditional-image-generation, video-classification, reinforcement-learning, robotics, tabular-classification, tabular-regression, tabular-to-text, table-to-text, multiple-choice, text-ranking, text-retrieval, time-series-forecasting, text-to-video, image-text-to-text, visual-question-answering, document-question-answering, zero-shot-image-classification, graph-ml, mask-generation, zero-shot-object-detection, text-to-3d, image-to-3d, image-feature-extraction, video-text-to-text, keypoint-detection, visual-document-retrieval, any-to-any, other
A link to example output of the nnll-autocard command
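A repo card is a README whose YAML front matter carries the metadata fields nnll-autocard fills in. A minimal sketch of that front matter, with illustrative values drawn from the usage example above (field names follow the Hugging Face model card convention):

```python
# Illustrative model-card front matter; the field values here are
# assumptions for demonstration, not nnll-autocard's actual output.
front_matter = "\n".join([
    "---",
    "library_name: mlx",
    "pipeline_tag: text-to-image",
    "base_model: black-forest-labs/FLUX.1-Krea-dev",
    "---",
])
```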
usage: nnll-autohash [-h] repo
Generate hashes for files or state dicts located remotely or in a cached repo
positional arguments:
repo Relative path to repository
options:
-h, --help show this help message and exit
A link to example output of the nnll-autohash command
usage: nnll-hash '~/Downloads/models/'
Output hashes of each model file state dict in [path] to console and .JSON
Offline function.
positional arguments:
path Path to the directory where files should be analyzed. (default: '.')
options:
-h, --help show this help message and exit
-f, --file Change mode to calculate hash for the whole file instead of state dict layers (default: False)
-s, --sha Change algorithm from BLAKE3 to SHA256 (default: False)
-d, --describe, --describe-process
Include processing metadata in the output (default: True)
-u, --unsafe Try to hash non-standard type model files. MAY INCLUDE NON-MODEL FILES. (default: False)
A link to example output of the nnll-hash command
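The `--sha` flag switches the digest algorithm to SHA-256. A minimal sketch of hashing one layer's raw bytes with Python's standard library (BLAKE3, the default, requires the third-party `blake3` package, so SHA-256 is shown; the byte string is a stand-in for a real state-dict layer):

```python
import hashlib

# Hash raw tensor bytes with SHA-256, as nnll-hash does with --sha.
# layer_bytes is a placeholder, not real model data.
layer_bytes = b"\x00" * 1024
digest = hashlib.sha256(layer_bytes).hexdigest()
```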
usage: nnll-layer adaln
Recursively search for layer name metadata in state dict .JSON files of the current folder.
Print filenames with matching layers to the console, along with the first matching layer's shape and tensor count.
Offline function.
positional arguments:
pattern Pattern to search for
options:
-h, --help show this help message and exit
Output:
2025-08-03 14:57:10 INFO ('./Pixart-Sigma-XL-2-2k-ms.diffusers.safetensors.json', {'shape': [1152], 'tensors': 604}) console.py:84
INFO ('./PixartXL-2-1024-ms.diffusers.safetensors.json', {'shape': [384], 'tensors': 613}) console.py:84
INFO ('./flash-pixart-a.safetensors.json', {'shape': [64, 256], 'tensors': 587})
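The search above can be sketched in a few lines, assuming each state-dict `.json` file maps layer names to their metadata (a simplification; the actual file layout and implementation may differ):

```python
import json
import re
from pathlib import Path


def find_layers(pattern: str, folder: str = ".") -> dict:
    """Sketch of an nnll-layer-style search: report the first key in
    each .json file under `folder` that matches `pattern`.
    Illustrative only, not the actual nnll implementation."""
    hits = {}
    for path in sorted(Path(folder).rglob("*.json")):
        try:
            data = json.loads(path.read_text())
        except (OSError, json.JSONDecodeError):
            continue  # skip unreadable or non-JSON files
        for key, value in data.items():
            if re.search(pattern, key):
                hits[path.name] = value
                break  # only the first matching layer per file
    return hits
```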
usage: nnll-meta ~/Downloads/models/images -s ~/Downloads/models/metadata
Scan the state dict metadata from a folder of files at [path], print it to the console, then write it to a JSON file at [save].
Offline function.
positional arguments:
path Path to directory where files should be analyzed. (default .)
options:
-h, --help show this help message and exit
-s, --save_to_folder_path SAVE_TO_FOLDER_PATH
Path where output should be stored. (default: '.')
-d, --separate_desc Ignore the metadata from the header. (default: False)
Valid input formats: ['.pth', '.onnx', '.pt', '.pickletensor', '.sft', '.ckpt', '.gguf', '.safetensors']
A link to example output of the nnll-meta command
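For `.safetensors` input, the state-dict metadata lives in a JSON header at the start of the file (per the safetensors format: an 8-byte little-endian length prefix followed by UTF-8 JSON). A minimal sketch of reading that header directly, independent of nnll's own code:

```python
import json
import struct


def read_safetensors_header(path: str) -> dict:
    """Read the JSON header of a .safetensors file. The first 8 bytes
    are a little-endian uint64 giving the header length in bytes;
    the header itself is UTF-8 JSON."""
    with open(path, "rb") as fh:
        (length,) = struct.unpack("<Q", fh.read(8))
        return json.loads(fh.read(length).decode("utf-8"))
```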
Each module contains 1-5 functions or 1-2 classes and its own test routines. There are multiple ways to integrate nnll into a project (sorted by level of involvement):
- Install the entire project as a dependency via `nnll @ git+https://github.com/darkshapes/nnll`
- Basic clone or fork of the project
- Use a submodule
- Filter a clone of the project to a single subfolder and include it in your own
Discussion topics, issue requests, reviews, and code updates are encouraged. Build with us! Talk to us in our Discord!