docs: cleanup of compilation instructions
Titus-von-Koeller committed Jul 27, 2024
1 parent 81375f8 commit 24f7b65
Showing 1 changed file with 7 additions and 7 deletions: docs/source/installation.mdx
@@ -2,7 +2,7 @@

## CUDA

- bitsandbytes is only supported on CUDA GPUs for CUDA versions **11.0 - 12.5**. There's a multi-backend effort under way which is currently in alpha release, see further down in this document.
+ bitsandbytes is only supported on CUDA GPUs for CUDA versions **11.0 - 12.5**. However, there's a multi-backend effort under way which is currently in alpha release; check [the respective section below if you're interested in helping us with early feedback](#multi-backend).

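To see which CUDA toolkit your installed PyTorch build targets, a quick check is the following sketch (it assumes `python` is on your PATH and prints a fallback message when PyTorch is missing):

```shell
# Print the CUDA version PyTorch was built against (or a fallback message)
python -c "import torch; print(torch.version.cuda)" 2>/dev/null || echo "PyTorch not installed"
```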
The latest version of bitsandbytes builds on:

Expand Down Expand Up @@ -134,7 +134,7 @@ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/YOUR_USERNAME/local/cuda-11.7

3. Now when you launch bitsandbytes with these environment variables, the PyTorch CUDA version is overridden by the new CUDA version (in this example, version 11.7) and a different bitsandbytes library is loaded.
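Concretely, step 3 might look like the following sketch. The path and the version `117` are example values from this guide, and `BNB_CUDA_VERSION` is assumed to be the override variable set in the steps above; adjust both to your setup:

```shell
# Example values only; point these at your own secondary CUDA install
export BNB_CUDA_VERSION=117
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/YOUR_USERNAME/local/cuda-11.7"
echo "bitsandbytes will be loaded against CUDA $BNB_CUDA_VERSION"
```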

- ## Multi-backend preview release (+ compilation)
+ ## Multi-backend preview release compilation[[multi-backend]]

Please follow these steps to install bitsandbytes with device-specific backend support other than CUDA:

@@ -143,11 +143,10 @@

### AMD GPU

- For a ROCm specific install:
-
- bitsandbytes is fully supported from ROCm 6.1.
+ bitsandbytes is fully supported from ROCm 6.1 onwards (currently in alpha release).

- **Note:** If you already installed ROCm and PyTorch, skip docker steps below and please check that the torch version matches your ROCm install. To install torch for a specific ROCm version, please refer to step 3 of wheels install in [Installing PyTorch for ROCm](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/3rd-party/pytorch-install.html#using-wheels-package) guide.
+ > [!TIP]
+ > If you already installed ROCm and PyTorch, skip the Docker steps below and check that your torch version matches your ROCm install. To install torch for a specific ROCm version, refer to step 3 (wheels install) of the [Installing PyTorch for ROCm](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/3rd-party/pytorch-install.html#using-wheels-package) guide.
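Along the lines of the tip above, a small pre-flight check can decide between the native and Docker routes (a sketch; it only probes for the `rocminfo` tool, which ships with ROCm):

```shell
# Probe for an existing ROCm install before pulling the Docker image
if command -v rocminfo >/dev/null 2>&1; then
  echo "ROCm detected: you can likely skip the Docker steps"
else
  echo "rocminfo not found: use the Docker container below"
fi
```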
```bash
# Create a Docker container with the latest PyTorch; it comes with ROCm and PyTorch preinstalled
@@ -161,6 +160,7 @@
git clone --depth 1 -b multi-backend-refactor https://github.com/TimDettmers/bitsandbytes.git && cd bitsandbytes/
pip install -r requirements-dev.txt

# Compile & install
+ apt-get install -y build-essential cmake  # install build tool dependencies, if not already present
cmake -DCOMPUTE_BACKEND=hip -S . # Use -DBNB_ROCM_ARCH="gfx90a;gfx942" to target specific gpu arch
make
pip install -e . # `-e` for "editable" install, when developing BNB (otherwise leave that out)
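# Optional sanity check (a sketch; assumes the steps above completed successfully)
python -c "import bitsandbytes" 2>/dev/null && echo "bitsandbytes import OK" || echo "bitsandbytes not importable yet"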
```

@@ -179,7 +179,7 @@
Similar to the CUDA case, you can compile bitsandbytes from source for Linux and Windows.
The below commands are for Linux. For installing on Windows, please adapt the below commands according to the same pattern as described [the section above on compiling from source under the Windows tab](#compile).

```
- git clone --branch multi-backend-refactor https://github.com/TimDettmers/bitsandbytes.git && cd bitsandbytes/
+ git clone --depth 1 -b multi-backend-refactor https://github.com/TimDettmers/bitsandbytes.git && cd bitsandbytes/
pip install intel_extension_for_pytorch
pip install -r requirements-dev.txt
cmake -DCOMPUTE_BACKEND=cpu -S .
```
