
bitsandbytes


The bitsandbytes library is a lightweight Python wrapper around CUDA custom functions, in particular 8-bit optimizers, matrix multiplication (LLM.int8()), and 8 & 4-bit quantization functions.

The library includes quantization primitives for 8-bit & 4-bit operations through bitsandbytes.nn.Linear8bitLt and bitsandbytes.nn.Linear4bit, and 8-bit optimizers through the bitsandbytes.optim module.

There are ongoing efforts to support further hardware backends, such as Intel CPU + GPU, AMD GPU, and Apple Silicon. Windows support is also quite far along.

Please head to the official documentation page:

https://huggingface.co/docs/bitsandbytes/main

bitsandbytes multi-backend alpha release is out!

🚀 Big news! After months of hard work and incredible community contributions, we're thrilled to announce the bitsandbytes multi-backend alpha release! 💥

Now supporting:

  • πŸ”₯ AMD GPUs (ROCm)
  • ⚑ Intel CPUs & GPUs

We'd love your early feedback! 🙏

👉 See the documentation for multi-backend pip install instructions.

We're super excited about these recent developments and grateful for any constructive input or support that you can give to help us make this a reality (e.g. helping us with the upcoming Apple Silicon backend or reporting bugs). BNB is a community project and we're excited for your collaboration 🤗

License

bitsandbytes is MIT licensed.

We thank Fabio Cannizzo for his work on FastBinarySearch which we use for CPU quantization.