# Other algorithms
_WIP: Still incomplete... Community contributions would be greatly welcome!_

This is an overview of the algorithms in `bitsandbytes` that we think would also be useful as standalone entities.

## Using Int8 Matrix Multiplication

For straight Int8 matrix multiplication with mixed precision decomposition you can use ``bnb.matmul(...)``. To enable mixed precision decomposition, use the threshold parameter:

```py
bnb.matmul(..., threshold=6.0)
```
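
As a rough usage sketch (the tensor shapes, dtypes, and device below are illustrative assumptions, and exact layout requirements may differ between versions), the call can stand in for a regular half-precision matrix multiply:

```py
import torch
import bitsandbytes as bnb

# Illustrative sketch: multiply two half-precision CUDA tensors through the
# Int8 kernel. With threshold=6.0, outlier values above the threshold are
# handled in 16-bit (mixed precision decomposition); the rest runs in Int8.
A = torch.randn(64, 64, dtype=torch.float16, device="cuda")
B = torch.randn(64, 64, dtype=torch.float16, device="cuda")

out = bnb.matmul(A, B, threshold=6.0)
print(out.shape, out.dtype)
```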
# Compiling from Source[[compiling]]

To compile from source, the CUDA Toolkit is required. Ensure `nvcc` is installed; if not, follow these steps to install it along with the CUDA Toolkit:

```bash
wget https://raw.githubusercontent.com/TimDettmers/bitsandbytes/main/install_cuda.sh
# Syntax: bash install_cuda.sh CUDA_VERSION INSTALL_PREFIX EXPORT_TO_BASH
# CUDA_VERSION options include 110 to 122
# EXPORT_TO_BASH: 0 for False, 1 for True

# Example for installing CUDA 11.7 at ~/local/cuda-11.7 and exporting the path to .bashrc:
bash install_cuda.sh 117 ~/local 1
```

For a single compile run with a specific CUDA version, set `CUDA_HOME` to point to your CUDA installation directory. For instance, to compile using CUDA 11.7 located at `~/local/cuda-11.7`, use:

```bash
CUDA_HOME=~/local/cuda-11.7 CUDA_VERSION=117 make cuda11x
```

## General Compilation Steps

1. Use `CUDA_VERSION=XXX make [target]` to compile, where `[target]` includes options like `cuda92`, `cuda10x`, `cuda11x`, and others.
2. Install with `python setup.py install` (a combined example follows below).
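
For instance, a complete build and install against CUDA 11.8 (the version and target here are only examples; pick the ones matching your toolkit) might look like:

```bash
# Build the CUDA 11.x binaries, then install the Python package.
CUDA_VERSION=118 make cuda11x
python setup.py install
```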

Ensure `nvcc` is available on your system. If you are using Anaconda, determine the CUDA version used by PyTorch with `conda list | grep cudatoolkit` and download the matching version from the [CUDA Toolkit Archive](https://developer.nvidia.com/cuda-toolkit-archive).

To install CUDA locally without administrative rights:

```bash
wget https://raw.githubusercontent.com/TimDettmers/bitsandbytes/main/install_cuda.sh
# Follow the same syntax and example as mentioned earlier
```

The compilation process relies on the `CUDA_HOME` environment variable to locate CUDA. If `CUDA_HOME` is unset, it will attempt to infer the location from `nvcc`. If `nvcc` is not in your path, you may need to add it or set `CUDA_HOME` manually. For example, if `python -m bitsandbytes` indicates your CUDA path as `/usr/local/cuda-11.7`, you can set `CUDA_HOME` to this path.
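
Setting it explicitly before compiling is usually enough; the path below is illustrative and should be replaced with whatever `python -m bitsandbytes` reports on your machine:

```bash
# Illustrative path: use the CUDA location reported for your own setup.
export CUDA_HOME=/usr/local/cuda-11.7
CUDA_VERSION=117 make cuda11x
python setup.py install
```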

If compilation issues arise, please open an issue.

## Compilation for Kepler Architecture

From version 0.39.1, bitsandbytes no longer includes Kepler binaries in pip installations, requiring manual compilation. Follow the general steps and use the `cuda11x_nomatmul_kepler` target for Kepler-targeted compilation.
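
Concretely, a Kepler build could look like the following (the CUDA version and install path are assumptions; use whichever toolkit you installed):

```bash
# Build the Kepler target of the CUDA 11.x binaries, then install the package.
CUDA_HOME=~/local/cuda-11.7 CUDA_VERSION=117 make cuda11x_nomatmul_kepler
python setup.py install
```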
# Contributors guidelines
... still under construction ... (feel free to propose materials, `bitsandbytes` is a community project)

## Setup pre-commit hooks
- Install pre-commit hooks with `pip install pre-commit`.
- Run `pre-commit autoupdate` once to configure the hooks.
- Re-run `pre-commit autoupdate` every time a new hook is added.

All the pre-commit hooks will now run automatically when you try to commit; if they introduce changes, you need to re-add the changed files before you can commit and push.
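
In practice the setup boils down to a few commands (note: `pre-commit install`, which registers the git hook, and the final full run are standard pre-commit usage rather than steps spelled out above):

```bash
pip install pre-commit        # install the pre-commit tool
pre-commit install            # register the git hook so checks run on every commit
pre-commit autoupdate         # update the hooks configured for the repository
pre-commit run --all-files    # optional: run all hooks once against the whole tree
```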

## Doc-string syntax

TODO: Add description + reference of HF docstring best practices.

## Documentation
- [guideline for documentation syntax](https://github.com/huggingface/doc-builder#readme)
- images shall be uploaded via PR in the `bitsandbytes/` directory [here](https://huggingface.co/datasets/huggingface/documentation-images)
# Errors & Solutions

## No kernel image available

This problem arises when the CUDA version loaded by bitsandbytes is not supported by your GPU, or when your PyTorch CUDA version does not match.

To solve this problem you need to debug ``$LD_LIBRARY_PATH``, ``$CUDA_HOME``, and ``$PATH``. You can print these via ``echo $PATH``. Look for multiple paths pointing to different CUDA versions; these can include versions in your Anaconda path, for example ``$HOME/anaconda3/lib``. You can check those versions via ``ls -l $HOME/anaconda3/lib/*cuda*`` or the equivalent for your paths. Look at the CUDA versions of the files in these paths. Do they match the version reported by ``nvidia-smi``?
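
A quick way to gather this information (the paths are examples; adapt them to your environment):

```bash
# Print the search paths that can pull in a conflicting CUDA version.
echo $PATH
echo $LD_LIBRARY_PATH
echo $CUDA_HOME

# Inspect CUDA libraries that may be picked up from an Anaconda installation.
ls -l $HOME/anaconda3/lib/*cuda*

# Compare against the CUDA version reported by the driver.
nvidia-smi
```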

If you are feeling lucky, you can also try to compile the library from source. This can still be problematic if your PATH variables contain multiple CUDA versions, so it is recommended to resolve path conflicts before you proceed with compilation.

__If you encounter any other error not listed here, please create an issue. This will help resolve your problem and will help out others in the future.__

## fatbinwrap

This error occurs if there is a mismatch between the CUDA versions of the C++ library and the CUDA part. Make sure you have the right CUDA version in your `$PATH` and `$LD_LIBRARY_PATH` variables. In the conda base environment you can find the library under:

```bash
ls $CONDA_PREFIX/lib/*cudart*
```

Make sure this path is appended to the `LD_LIBRARY_PATH` so bnb can find the CUDA runtime environment library (cudart).
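
For example, in a conda environment this typically means appending the directory found above (adjust if your `ls` output points elsewhere):

```bash
# Append the conda runtime library directory so bnb can locate libcudart.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib
```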

If this does not fix the issue, please try compilation from source next.

If this does not work, please open an issue and paste the environment printed when you call `make`, along with the error you get when running bnb.