From 20a4fab8ddb9be220ccdd4f2102392339482d916 Mon Sep 17 00:00:00 2001 From: Titus von Koeller <9048635+Titus-von-Koeller@users.noreply.github.com> Date: Mon, 30 Sep 2024 19:14:44 +0000 Subject: [PATCH 01/11] refine docs for multi-backend alpha release --- docs/source/installation.mdx | 87 ++++++++++++++++++++++++++++++------ 1 file changed, 74 insertions(+), 13 deletions(-) diff --git a/docs/source/installation.mdx b/docs/source/installation.mdx index 2f82c199b..7308ba0f4 100644 --- a/docs/source/installation.mdx +++ b/docs/source/installation.mdx @@ -134,28 +134,31 @@ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/YOUR_USERNAME/local/cuda-11.7 3. Now when you launch bitsandbytes with these environment variables, the PyTorch CUDA version is overridden by the new CUDA version (in this example, version 11.7) and a different bitsandbytes library is loaded. -## Multi-backend[[multi-backend]] +## Multi-backend Support (Alpha Release)[[multi-backend]] > [!TIP] -> This functionality is currently in preview and therefore not yet production-ready! Please reference [this guide](./non_cuda_backends) for more in-depth information about the different backends and their current status. +> This functionality is currently in preview and not yet production-ready. We very much welcome community feedback, contributions and leadership on topics like Apple Silicon as well as other less common accelerators! For more information, see [this guide on multi-backend support](./non_cuda_backends). 
-Please follow these steps to install bitsandbytes with device-specific backend support other than CUDA: +### Supported Backends -### Pip install the pre-built wheel (recommended for most) +| **Backend** | **Supported Versions** | **Python versions** | **Architecture Support** | **Status** | +|-------------|------------------------|---------------------------|-------------------------|------------| +| **AMD ROCm** | 6.1+ | 3.10+ | minimum CDNA - `gfx90a`, RDNA - `gfx1100` | Alpha | +| **Apple Silicon (MPS)** | WIP | 3.10+ | M1/M2 chips | Planned | +| **Intel CPU** | v2.4.0+ (`ipex`) | 3.10+ | Intel CPU | Alpha | +| **Intel GPU** | v2.4.0+ (`ipex`) | 3.10+ | Intel GPU | Experimental | -WIP (will be added in the coming days) +For each supported backend, follow the respective instructions below: -### Compilation +### Pre-requisites - -#### AMD GPU - -bitsandbytes is fully supported from ROCm 6.1 onwards (currently in alpha release). +- Precompiled binaries are only built for ROCm versions `6.1.0`/`6.1.1`/`6.1.2`/`6.2.0` and `gfx90a`, `gfx942`, `gfx1100` GPU architectures. +- Other supported versions that don't come with pre-compiled binaries [can be compiled for with these instructions](#multi-backend-compile). > [!TIP] -> If you would like to install ROCm and PyTorch on bare metal, skip Docker steps and refer to our official guides at [ROCm installation overview](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/tutorial/install-overview.html#rocm-install-overview) and [Installing PyTorch for ROCm](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/3rd-party/pytorch-install.html#using-wheels-package) (Step 3 of wheels build for quick installation). Please make sure to get PyTorch wheel for the installed ROCm version. 
+> If you would like to install ROCm and PyTorch on bare metal, skip the Docker steps and refer to ROCm's official guides at [ROCm installation overview](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/tutorial/install-overview.html#rocm-install-overview) and [Installing PyTorch for ROCm](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/3rd-party/pytorch-install.html#using-wheels-package) (Step 3 of wheels build for quick installation). Special note: please make sure to get the respective ROCm-specific PyTorch wheel for the installed ROCm version, e.g. `https://download.pytorch.org/whl/nightly/rocm6.2/`! ```bash # Create a docker container with latest ROCm image, which includes ROCm libraries @@ -165,9 +168,67 @@ apt-get update && apt-get install -y git && cd home # Install pytorch compatible with above ROCm version pip install torch --index-url https://download.pytorch.org/whl/rocm6.1/ +``` + + + +You need compatible hardware and a functioning environment in which `import intel_extension_for_pytorch as ipex` succeeds, with Python `3.10` as the minimum requirement. + +Please refer to [the official Intel installation instructions](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=cpu&version=v2.4.0%2bcpu&os=linux%2fwsl2) for guidance on how to pip install the necessary `intel_extension_for_pytorch` dependency. + + + -# Install bitsandbytes from PyPI -# (This is supported on Ubuntu 22.04, Python 3.10, ROCm 6.1.0/6.1.1/6.1.2/6.2.0 and gpu arch - gfx90a, gfx942, gfx1100 +Apple Silicon support is still a WIP. Please visit and write us in [this Github Discussion space on coordinating the kickoff of MPS backend development](#PLACEHOLDER) and coordinate a community-led effort to implement this backend. + + + + +### Installation + +You can install the pre-built wheels for each backend, or compile from source for custom configurations. 
+ +#### Pre-built Wheel Installation (recommended) + + + + +``` +pip install --force-reinstall 'PLACEHOLDER' +``` + + + + +``` +pip install --force-reinstall 'PLACEHOLDER' +``` + + + + + +> [!WARNING] +> bitsandbytes does not yet support Apple Silicon / Metal with a dedicated backend. However, the build infrastructure is in place and the below pip install will eventually provide Apple Silicon support as it comes available on the `multi-backend-refactor` branch based on community contributions. If you're interested to work on this (we might even have limited funding the the Mozilla foundation), please contact us by tagging @Titus-von-Koeller and @matthewdouglas in a Github discussion on our repo. + +``` +pip install --force-reinstall 'PLACEHOLDER' +``` + + + + +#### Compile from Source[[multi-backend-compile]] + + + + +#### AMD GPU + +bitsandbytes is fully supported from ROCm 6.1 onwards (currently in alpha release). + +```bash # Please install from source if your configuration doesn't match with these) pip install bitsandbytes From a7d52e8e557ab80ba08b2eca87cf274a712a0bc7 Mon Sep 17 00:00:00 2001 From: Titus von Koeller <9048635+Titus-von-Koeller@users.noreply.github.com> Date: Mon, 30 Sep 2024 21:36:56 +0000 Subject: [PATCH 02/11] docs: further tweaks to multi-backend alpha docs --- docs/source/installation.mdx | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/docs/source/installation.mdx b/docs/source/installation.mdx index 7308ba0f4..9c8df22b5 100644 --- a/docs/source/installation.mdx +++ b/docs/source/installation.mdx @@ -154,8 +154,10 @@ For each supported backend, follow the respective instructions below: -- Precompiled binaries are only built for ROCm versions `6.1.0`/`6.1.1`/`6.1.2`/`6.2.0` and `gfx90a`, `gfx942`, `gfx1100` GPU architectures. -- Other supported versions that don't come with pre-compiled binaries [can be compiled for with these instructions](#multi-backend-compile). 
+> [!WARNING] +> Pre-compiled binaries are only built for ROCm versions `6.1.0`/`6.1.1`/`6.1.2`/`6.2.0` and `gfx90a`, `gfx942`, `gfx1100` GPU architectures. [Find the pip install instructions here](#multi-backend-pip). +> +> Other supported versions that don't come with pre-compiled binaries [can be compiled with these instructions](#multi-backend-compile). > [!TIP] > If you would like to install ROCm and PyTorch on bare metal, skip the Docker steps and refer to ROCm's official guides at [ROCm installation overview](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/tutorial/install-overview.html#rocm-install-overview) and [Installing PyTorch for ROCm](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/3rd-party/pytorch-install.html#using-wheels-package) (Step 3 of wheels build for quick installation). Special note: please make sure to get the respective ROCm-specific PyTorch wheel for the installed ROCm version, e.g. `https://download.pytorch.org/whl/nightly/rocm6.2/`! @@ -189,20 +191,20 @@ Apple Silicon support is still a WIP. Please visit and write us in [this Github You can install the pre-built wheels for each backend, or compile from source for custom configurations. 
-#### Pre-built Wheel Installation (recommended) +#### Pre-built Wheel Installation (recommended)[[multi-backend-pip]] ``` -pip install --force-reinstall 'PLACEHOLDER' +pip install --force-reinstall 'https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_multi-backend-refactor/bitsandbytes-0.44.1.dev0-py3-none-manylinux_2_24_x86_64.whl' ``` ``` -pip install --force-reinstall 'PLACEHOLDER' +pip install --force-reinstall 'https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_multi-backend-refactor/bitsandbytes-0.44.1.dev0-py3-none-win_amd64.whl' ``` @@ -213,7 +215,7 @@ pip install --force-reinstall 'PLACEHOLDER' > bitsandbytes does not yet support Apple Silicon / Metal with a dedicated backend. However, the build infrastructure is in place and the below pip install will eventually provide Apple Silicon support as it comes available on the `multi-backend-refactor` branch based on community contributions. If you're interested to work on this (we might even have limited funding the the Mozilla foundation), please contact us by tagging @Titus-von-Koeller and @matthewdouglas in a Github discussion on our repo. 
``` -pip install --force-reinstall 'PLACEHOLDER' +pip install --force-reinstall 'https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_multi-backend-refactor/bitsandbytes-0.44.1.dev0-py3-none-macosx_13_1_arm64.whl' ``` From e288a2017bfd28ee504570072175b466c395ecbe Mon Sep 17 00:00:00 2001 From: Titus von Koeller <9048635+Titus-von-Koeller@users.noreply.github.com> Date: Mon, 30 Sep 2024 22:05:15 +0000 Subject: [PATCH 03/11] docs: further tweaks to multi-backend alpha docs --- docs/source/installation.mdx | 79 +++++++++++++++++++++++------------- 1 file changed, 50 insertions(+), 29 deletions(-) diff --git a/docs/source/installation.mdx b/docs/source/installation.mdx index 9c8df22b5..93c635a13 100644 --- a/docs/source/installation.mdx +++ b/docs/source/installation.mdx @@ -1,29 +1,45 @@ -# Installation +# Installation Guide -## CUDA +Welcome to the installation guide for the `bitsandbytes` library! This document provides step-by-step instructions to install `bitsandbytes` across various platforms and hardware configurations. The library primarily supports CUDA-based GPUs, but the team is actively working on enabling support for additional backends like AMD ROCm, Intel, and Apple Silicon. -bitsandbytes is only supported on CUDA GPUs for CUDA versions **11.0 - 12.5**. However, there's a multi-backend effort under way which is currently in alpha release, check [the respective section below in case you're interested to help us with early feedback](#multi-backend). +> [!TIP] +> For a high-level overview of backend support and compatibility, see the [Multi-backend Support](#multi-backend) section. 
-The latest version of bitsandbytes builds on: +## Table of Contents -| OS | CUDA | Compiler | -|---|---|---| -| Linux | 11.7 - 12.3 | GCC 11.4 | -| | 12.4+ | GCC 13.2 | -| Windows | 11.7 - 12.4 | MSVC 19.38+ (VS2022 17.8.0+) | +- [CUDA](#cuda) + - [Installation via PyPI](#cuda-pip) + - [Compile from Source](#cuda-compile) +- [Multi-backend Support (Alpha Release)](#multi-backend) + - [Supported Backends](#multi-backend-supported-backends) + - [Pre-requisites](#multi-backend-pre-requisites) + - [Installation](#multi-backend-pip) + - [Compile from Source](#multi-backend-compile) +- [PyTorch CUDA Versions](#pytorch-cuda-versions) -> [!TIP] -> MacOS support is still a work in progress! Subscribe to this [issue](https://github.com/TimDettmers/bitsandbytes/issues/1020) to get notified about discussions and to track the integration progress. +## CUDA[[cuda]] -For Linux systems, make sure your hardware meets the following requirements to use bitsandbytes features. +`bitsandbytes` is currently only supported on CUDA GPUs for CUDA versions **11.0 - 12.5**. However, there's an ongoing multi-backend effort under development, which is currently in alpha. If you're interested in providing feedback or testing, check out [the multi-backend section below](#multi-backend-support-alpha-release). 
-| **Feature** | **Hardware requirement** | -|---|---| -| LLM.int8() | NVIDIA Turing (RTX 20 series, T4) or Ampere (RTX 30 series, A4-A100) GPUs | -| 8-bit optimizers/quantization | NVIDIA Kepler (GTX 780 or newer) | +### Supported CUDA Configurations[[cuda-pip]] + +The latest version of `bitsandbytes` builds on the following configurations: + +| **OS** | **CUDA Version** | **Compiler** | +|-------------|------------------|----------------------| +| **Linux** | 11.7 - 12.3 | GCC 11.4 | +| | 12.4+ | GCC 13.2 | +| **Windows** | 11.7 - 12.4 | MSVC 19.38+ (VS2022) | + +For Linux systems, ensure your hardware meets the following requirements: + +| **Feature** | **Hardware Requirement** | +|---------------------------------|--------------------------------------------------------------------| +| LLM.int8() | NVIDIA Turing (RTX 20 series, T4) or Ampere (RTX 30 series, A4-A100) GPUs | +| 8-bit optimizers/quantization | NVIDIA Kepler (GTX 780 or newer) | > [!WARNING] -> bitsandbytes >= 0.39.1 no longer includes Kepler binaries in pip installations. This requires manual compilation, and you should follow the general steps and use `cuda11x_nomatmul_kepler` for Kepler-targeted compilation. +> `bitsandbytes >= 0.39.1` no longer includes Kepler binaries in pip installations. This requires [manual compilation](#cuda-compile) using the `cuda11x_nomatmul_kepler` configuration. To install from PyPI. ```bash pip install bitsandbytes ``` -### Compile from source[[compile]] +### Compile from source[[cuda-compile]] + +> [!TIP] +> Don't hesitate to compile from source! The process is pretty straightforward and resilient. This might be needed for older CUDA versions or other less common configurations, which we don't support out of the box due to package size. -For Linux and Windows systems, you can compile bitsandbytes from source. Installing from source allows for more build options with different CMake configurations. 
+For Linux and Windows systems, compiling from source allows you to customize the build configurations. See below for detailed platform-specific instructions (see the `CMakeLists.txt` if you want to check the specifics and explore some additional options): -To compile from source, you need CMake >= **3.22.1** and Python >= **3.8** installed. Make sure you have a compiler installed to compile C++ (gcc, make, headers, etc.). For example, to install a compiler and CMake on Ubuntu: +To compile from source, you need CMake >= **3.22.1** and Python >= **3.8** installed. Make sure you have a compiler installed to compile C++ (`gcc`, `make`, headers, etc.). + +For example, to install a compiler and CMake on Ubuntu: ```bash apt-get install -y build-essential cmake @@ -48,11 +69,11 @@ You should also install CUDA Toolkit by following the [NVIDIA CUDA Installation Refer to the following table if you're using another CUDA Toolkit version. -| CUDA Toolkit | GCC | -|---|---| -| >= 11.4.1 | >= 11 | -| >= 12.0 | >= 12 | -| >= 12.4 | >= 13 | +| CUDA Toolkit | GCC | +|--------------|-------| +| >= 11.4.1 | >= 11 | +| >= 12.0 | >= 12 | +| >= 12.4 | >= 13 | Now to install the bitsandbytes package from source, run the following commands: @@ -93,7 +114,7 @@ Big thanks to [wkpark](https://github.com/wkpark), [Jamezo97](https://github.com -### PyTorch CUDA versions +### PyTorch CUDA versions[[pytorch-cuda-versions]] Some bitsandbytes features may need a newer CUDA version than the one currently supported by PyTorch binaries from Conda and pip. In this case, you should follow these instructions to load a precompiled bitsandbytes binary. @@ -139,7 +160,7 @@ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/YOUR_USERNAME/local/cuda-11.7 > [!TIP] > This functionality is currently in preview and not yet production-ready. We very much welcome community feedback, contributions and leadership on topics like Apple Silicon as well as other less common accellerators! 
For more information, see [this guide on multi-backend support](./non_cuda_backends). -### Supported Backends +### Supported Backends[[multi-backend-supported-backends]] | **Backend** | **Supported Versions** | **Python versions** | **Architecture Support** | **Status** | |-------------|------------------------|---------------------------|-------------------------|------------| | **AMD ROCm** | 6.1+ | 3.10+ | minimum CDNA - `gfx90a`, RDNA - `gfx1100` | Alpha | | **Apple Silicon (MPS)** | WIP | 3.10+ | M1/M2 chips | Planned | | **Intel CPU** | v2.4.0+ (`ipex`) | 3.10+ | Intel CPU | Alpha | | **Intel GPU** | v2.4.0+ (`ipex`) | 3.10+ | Intel GPU | Experimental | For each supported backend, follow the respective instructions below: -### Pre-requisites +### Pre-requisites[[multi-backend-pre-requisites]] @@ -258,7 +279,7 @@ pip install -e . # `-e` for "editable" install, when developing BNB (otherwise Similar to the CUDA case, you can compile bitsandbytes from source for Linux and Windows systems. -The below commands are for Linux. For installing on Windows, please adapt the below commands according to the same pattern as described [the section above on compiling from source under the Windows tab](#compile). +The below commands are for Linux. For installing on Windows, please adapt the below commands according to the same pattern as described in [the section above on compiling from source under the Windows tab](#cuda-compile). 
``` git clone --depth 1 -b multi-backend-refactor https://github.com/TimDettmers/bitsandbytes.git && cd bitsandbytes/ From 73392b51e48330826e55a9efe6bc8b3ed900e8fd Mon Sep 17 00:00:00 2001 From: Titus von Koeller <9048635+Titus-von-Koeller@users.noreply.github.com> Date: Mon, 30 Sep 2024 22:19:34 +0000 Subject: [PATCH 04/11] docs: further tweaks to multi-backend alpha docs --- docs/source/installation.mdx | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/docs/source/installation.mdx b/docs/source/installation.mdx index 93c635a13..88828c995 100644 --- a/docs/source/installation.mdx +++ b/docs/source/installation.mdx @@ -203,7 +203,8 @@ Please refer to [the official Intel installations instructions](https://intel.gi -Apple Silicon support is still a WIP. Please visit and write us in [this Github Discussion space on coordinating the kickoff of MPS backend development](#PLACEHOLDER) and coordinate a community-led effort to implement this backend. +> [!TIP] +> Apple Silicon support is still a WIP. Please visit and write to us in [this GitHub Discussion space on coordinating the kickoff of MPS backend development](https://github.com/bitsandbytes-foundation/bitsandbytes/discussions/1340) to coordinate a community-led effort to implement this backend. @@ -218,6 +219,7 @@ You can install the pre-built wheels for each backend, or compile from source fo ``` +# Note, if you don't want to reinstall BNB's dependencies, append the `--no-deps` flag! pip install --force-reinstall 'https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_multi-backend-refactor/bitsandbytes-0.44.1.dev0-py3-none-manylinux_2_24_x86_64.whl' ``` @@ -225,6 +227,7 @@ pip install --force-reinstall 'https://github.com/bitsandbytes-foundation/bitsan ``` +# Note, if you don't want to reinstall BNB's dependencies, append the `--no-deps` flag! 
pip install --force-reinstall 'https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_multi-backend-refactor/bitsandbytes-0.44.1.dev0-py3-none-win_amd64.whl' ``` @@ -233,9 +236,10 @@ pip install --force-reinstall 'https://github.com/bitsandbytes-foundation/bitsan > [!WARNING] -> bitsandbytes does not yet support Apple Silicon / Metal with a dedicated backend. However, the build infrastructure is in place and the below pip install will eventually provide Apple Silicon support as it comes available on the `multi-backend-refactor` branch based on community contributions. If you're interested to work on this (we might even have limited funding the the Mozilla foundation), please contact us by tagging @Titus-von-Koeller and @matthewdouglas in a Github discussion on our repo. +> bitsandbytes does not yet support Apple Silicon / Metal with a dedicated backend. However, the build infrastructure is in place and the below pip install will eventually provide Apple Silicon support as it becomes available on the `multi-backend-refactor` branch based on community contributions. ``` +# Note, if you don't want to reinstall BNB's dependencies, append the `--no-deps` flag! 
pip install --force-reinstall 'https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_multi-backend-refactor/bitsandbytes-0.44.1.dev0-py3-none-macosx_13_1_arm64.whl' ``` From 8e3900dab69bb3030a5072a1784a3e48b2866cbe Mon Sep 17 00:00:00 2001 From: Titus von Koeller <9048635+Titus-von-Koeller@users.noreply.github.com> Date: Mon, 30 Sep 2024 22:49:31 +0000 Subject: [PATCH 05/11] docs: add multi-backend feedback links --- docs/source/installation.mdx | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/docs/source/installation.mdx b/docs/source/installation.mdx index 88828c995..1219b8873 100644 --- a/docs/source/installation.mdx +++ b/docs/source/installation.mdx @@ -160,6 +160,26 @@ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/YOUR_USERNAME/local/cuda-11.7 > [!TIP] > This functionality is currently in preview and not yet production-ready. We very much welcome community feedback, contributions and leadership on topics like Apple Silicon as well as other less common accelerators! For more information, see [this guide on multi-backend support](./non_cuda_backends). 
+Link to give us feedback (bugs, install issues, perf results, requests, etc.): + + + +[**Multi-backend refactor: Alpha release (AMD ROCm ONLY)**](https://github.com/bitsandbytes-foundation/bitsandbytes/discussions/1339) + + + +[**Multi-backend refactor: Alpha release (INTEL ONLY)**](https://github.com/bitsandbytes-foundation/bitsandbytes/discussions/1338) + + + +[**GitHub Discussion space on coordinating the kickoff of MPS backend development**](https://github.com/bitsandbytes-foundation/bitsandbytes/discussions/1340) + + + + ### Supported Backends[[multi-backend-supported-backends]] From b03d9ded62affd71f22c6b8edf965234e1019dcd Mon Sep 17 00:00:00 2001 From: Titus von Koeller <9048635+Titus-von-Koeller@users.noreply.github.com> Date: Mon, 30 Sep 2024 22:52:10 +0000 Subject: [PATCH 06/11] docs: add request for contributions --- docs/source/non_cuda_backends.mdx | 3 +++ 1 file changed, 3 insertions(+) diff --git a/docs/source/non_cuda_backends.mdx b/docs/source/non_cuda_backends.mdx index fc7c6ac27..728606b7b 100644 --- a/docs/source/non_cuda_backends.mdx +++ b/docs/source/non_cuda_backends.mdx @@ -1,5 +1,8 @@ # Multi-backend support (non-CUDA backends) +> [!Tip] +> If you feel these docs need some additional info, please consider submitting a PR or respectfully requesting the missing info in one of the below-mentioned GitHub discussion spaces. + As part of a recent refactoring effort, we will soon offer official multi-backend support. Currently, this feature is available in a preview alpha release, allowing us to gather early feedback from users to improve the functionality and identify any bugs. At present, the Intel CPU and AMD ROCm backends are considered fully functional. The Intel XPU backend has limited functionality and is less mature. 
From 58177fabdeb56ac1d2308afe9c33ee33d9db9998 Mon Sep 17 00:00:00 2001 From: Titus von Koeller <9048635+Titus-von-Koeller@users.noreply.github.com> Date: Mon, 30 Sep 2024 23:14:35 +0000 Subject: [PATCH 07/11] docs: small fixes --- docs/source/installation.mdx | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/docs/source/installation.mdx b/docs/source/installation.mdx index 1219b8873..377a84023 100644 --- a/docs/source/installation.mdx +++ b/docs/source/installation.mdx @@ -19,7 +19,7 @@ Welcome to the installation guide for the `bitsandbytes` library! This document ## CUDA[[cuda]] -`bitsandbytes` is currently only supported on CUDA GPUs for CUDA versions **11.0 - 12.5**. However, there's an ongoing multi-backend effort under development, which is currently in alpha. If you're interested in providing feedback or testing, check out [the multi-backend section below](#multi-backend-support-alpha-release). +`bitsandbytes` is currently only supported on CUDA GPUs for CUDA versions **11.0 - 12.5**. However, there's an ongoing multi-backend effort under development, which is currently in alpha. If you're interested in providing feedback or testing, check out [the multi-backend section below](#multi-backend). ### Supported CUDA Configurations[[cuda-pip]] @@ -78,7 +78,7 @@ Refer to the following table if you're using another CUDA Toolkit version. Now to install the bitsandbytes package from source, run the following commands: ```bash -git clone https://github.com/TimDettmers/bitsandbytes.git && cd bitsandbytes/ +git clone https://github.com/bitsandbytes-foundation/bitsandbytes.git && cd bitsandbytes/ pip install -r requirements-dev.txt cmake -DCOMPUTE_BACKEND=cuda -S . make @@ -102,7 +102,7 @@ Refer to the following table if you're using another CUDA Toolkit version. 
| >= 11.6 | 19.30+ (VS2022) | ```bash -git clone https://github.com/TimDettmers/bitsandbytes.git && cd bitsandbytes/ +git clone https://github.com/bitsandbytes-foundation/bitsandbytes.git && cd bitsandbytes/ pip install -r requirements-dev.txt cmake -DCOMPUTE_BACKEND=cuda -S . cmake --build . --config Release @@ -126,7 +126,7 @@ Some bitsandbytes features may need a newer CUDA version than the one currently Then locally install the CUDA version you need with this script from bitsandbytes: ```bash -wget https://raw.githubusercontent.com/TimDettmers/bitsandbytes/main/install_cuda.sh +wget https://raw.githubusercontent.com/bitsandbytes-foundation/bitsandbytes/main/install_cuda.sh # Syntax cuda_install CUDA_VERSION INSTALL_PREFIX EXPORT_TO_BASH # CUDA_VERSION in {110, 111, 112, 113, 114, 115, 116, 117, 118, 120, 121, 122, 123, 124, 125} # EXPORT_TO_BASH in {0, 1} with 0=False and 1=True @@ -306,7 +306,7 @@ Similar to the CUDA case, you can compile bitsandbytes from source for Linux and The below commands are for Linux. For installing on Windows, please adapt the below commands according to the same pattern as described [the section above on compiling from source under the Windows tab](#cuda-compile). ``` -git clone --depth 1 -b multi-backend-refactor https://github.com/TimDettmers/bitsandbytes.git && cd bitsandbytes/ +git clone --depth 1 -b multi-backend-refactor https://github.com/bitsandbytes-foundation/bitsandbytes.git && cd bitsandbytes/ pip install intel_extension_for_pytorch pip install -r requirements-dev.txt cmake -DCOMPUTE_BACKEND=cpu -S . 
From ade37efe76ae0ad362c00725d9539935a6302958 Mon Sep 17 00:00:00 2001 From: Titus von Koeller <9048635+Titus-von-Koeller@users.noreply.github.com> Date: Mon, 30 Sep 2024 23:16:42 +0000 Subject: [PATCH 08/11] docs: small fixes --- docs/source/installation.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/source/installation.mdx b/docs/source/installation.mdx index 377a84023..ea9f88e48 100644 --- a/docs/source/installation.mdx +++ b/docs/source/installation.mdx @@ -160,7 +160,7 @@ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/YOUR_USERNAME/local/cuda-11.7 > [!TIP] > This functionality is currently in preview and not yet production-ready. We very much welcome community feedback, contributions and leadership on topics like Apple Silicon as well as other less common accelerators! For more information, see [this guide on multi-backend support](./non_cuda_backends). -Link to give us feedback (bugs, install issues, perf results, requests, etc.): +**Link to give us feedback** (bugs, install issues, perf results, requests, etc.)**:** From 508afef8adaaf56c96e542e8c859933f0a0fbfdb Mon Sep 17 00:00:00 2001 From: Titus von Koeller <9048635+Titus-von-Koeller@users.noreply.github.com> Date: Mon, 30 Sep 2024 23:25:46 +0000 Subject: [PATCH 09/11] docs: add info about `main` continuous build --- docs/source/installation.mdx | 22 ++++++++++++++++++++ 1 file changed, 22 insertions(+) diff --git a/docs/source/installation.mdx b/docs/source/installation.mdx index ea9f88e48..205e1d17c 100644 --- a/docs/source/installation.mdx +++ b/docs/source/installation.mdx @@ -47,6 +47,28 @@ 
pip install bitsandbytes ``` +### `pip install` pre-built wheel from latest `main` commit + +If you would like to use new features even before they are officially released and help us test them, feel free to install the wheel directly from our CI (*the wheel links will remain stable!*): + + + + +``` +# Note, if you don't want to reinstall BNB's dependencies, append the `--no-deps` flag! +pip install --force-reinstall 'https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_main/bitsandbytes-0.44.2.dev0-py3-none-manylinux_2_24_x86_64.whl' +``` + + + + +``` +# Note, if you don't want to reinstall BNB's dependencies, append the `--no-deps` flag! +pip install --force-reinstall 'https://github.com/bitsandbytes-foundation/bitsandbytes/releases/download/continuous-release_multi-backend-refactor/bitsandbytes-0.44.1.dev0-py3-none-macosx_13_1_arm64.whl' +``` + + ### Compile from source[[cuda-compile]] > [!TIP] From f207da072e8c9eb8464bd55905a2938599ea2bbc Mon Sep 17 00:00:00 2001 From: Titus von Koeller <9048635+Titus-von-Koeller@users.noreply.github.com> Date: Mon, 30 Sep 2024 23:38:30 +0000 Subject: [PATCH 10/11] docs: further tweaks to multi-backend alpha docs --- docs/source/installation.mdx | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/docs/source/installation.mdx b/docs/source/installation.mdx index 205e1d17c..0adedbf6a 100644 --- a/docs/source/installation.mdx +++ b/docs/source/installation.mdx @@ -217,10 +217,13 @@ For each supported backend, follow the respective instructions below: + > [!WARNING] > Pre-compiled binaries are only built for ROCm versions `6.1.0`/`6.1.1`/`6.1.2`/`6.2.0` and `gfx90a`, `gfx942`, `gfx1100` GPU architectures. [Find the pip install instructions here](#multi-backend-pip). > > Other supported versions that don't come with pre-compiled binaries [can be compiled with these instructions](#multi-backend-compile). 
+> +> **Windows is not supported for the ROCm backend**; also not WSL2 to our knowledge. > [!TIP] > If you would like to install ROCm and PyTorch on bare metal, skip the Docker steps and refer to ROCm's official guides at [ROCm installation overview](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/tutorial/install-overview.html#rocm-install-overview) and [Installing PyTorch for ROCm](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/3rd-party/pytorch-install.html#using-wheels-package) (Step 3 of wheels build for quick installation). Special note: please make sure to get the respective ROCm-specific PyTorch wheel for the installed ROCm version, e.g. `https://download.pytorch.org/whl/nightly/rocm6.2/`! @@ -276,7 +279,6 @@ pip install --force-reinstall 'https://github.com/bitsandbytes-foundation/bitsan - > [!WARNING] > bitsandbytes does not yet support Apple Silicon / Metal with a dedicated backend. However, the build infrastructure is in place and the below pip install will eventually provide Apple Silicon support as it becomes available on the `multi-backend-refactor` branch based on community contributions. From e923bcb99444609500d9cf18e1699af57d789720 Mon Sep 17 00:00:00 2001 From: Titus von Koeller <9048635+Titus-von-Koeller@users.noreply.github.com> Date: Mon, 30 Sep 2024 23:45:31 +0000 Subject: [PATCH 11/11] docs: further tweaks to multi-backend alpha docs --- docs/source/installation.mdx | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/docs/source/installation.mdx b/docs/source/installation.mdx index 0adedbf6a..2ac56e03f 100644 --- a/docs/source/installation.mdx +++ b/docs/source/installation.mdx @@ -215,6 +215,12 @@ For each supported backend, follow the respective instructions below: ### Pre-requisites[[multi-backend-pre-requisites]] +To use bitsandbytes non-CUDA backends, be sure to install: + +``` +pip install "transformers>=4.45.1" +``` +