This repository was archived by the owner on May 21, 2025. It is now read-only.

[RuntimeError: CUDA is required but not available for bitsandbytes] #3

@zejun-chen

Description

Hi,

When using Intel bitsandbytes for QLoRA fine-tuning in LLaMA-Factory, the run fails with an error saying that only CUDA is supported. Have you encountered this issue? Or do I need to import bitsandbytes_intel explicitly?

[ERROR|bitsandbytes.py:538] 2025-05-09 08:10:56,061 >> CUDA is required but not available for bitsandbytes. Please consider installing the multi-platform enabled version of bitsandbytes, which is currently a work in progress. Please check currently supported platforms and installation instructions at https://huggingface.co/docs/bitsandbytes/main/en/installation#multi-backend
Traceback (most recent call last):
  File "/home/sdp/miniforge3/envs/zejun_ccl/bin/llamafactory-cli", line 33, in <module>
    sys.exit(load_entry_point('llamafactory==0.9.3.dev0', 'console_scripts', 'llamafactory-cli')())
  File "/home/sdp/miniforge3/envs/zejun_ccl/lib/python3.10/site-packages/llamafactory-0.9.3.dev0-py3.10.egg/llamafactory/cli.py", line 115, in main
    COMMAND_MAP[command]()
  File "/home/sdp/miniforge3/envs/zejun_ccl/lib/python3.10/site-packages/llamafactory-0.9.3.dev0-py3.10.egg/llamafactory/train/tuner.py", line 110, in run_exp
    _training_function(config={"args": args, "callbacks": callbacks})
  File "/home/sdp/miniforge3/envs/zejun_ccl/lib/python3.10/site-packages/llamafactory-0.9.3.dev0-py3.10.egg/llamafactory/train/tuner.py", line 72, in _training_function
    run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
  File "/home/sdp/miniforge3/envs/zejun_ccl/lib/python3.10/site-packages/llamafactory-0.9.3.dev0-py3.10.egg/llamafactory/train/sft/workflow.py", line 52, in run_sft
    model = load_model(tokenizer, model_args, finetuning_args, training_args.do_train)
  File "/home/sdp/miniforge3/envs/zejun_ccl/lib/python3.10/site-packages/llamafactory-0.9.3.dev0-py3.10.egg/llamafactory/model/loader.py", line 167, in load_model
    model = load_class.from_pretrained(**init_kwargs)
  File "/home/sdp/miniforge3/envs/zejun_ccl/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 571, in from_pretrained
    return model_class.from_pretrained(
  File "/home/sdp/miniforge3/envs/zejun_ccl/lib/python3.10/site-packages/transformers/modeling_utils.py", line 279, in _wrapper
    return func(*args, **kwargs)
  File "/home/sdp/miniforge3/envs/zejun_ccl/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4228, in from_pretrained
    hf_quantizer.validate_environment(
  File "/home/sdp/miniforge3/envs/zejun_ccl/lib/python3.10/site-packages/transformers/quantizers/quantizer_bnb_4bit.py", line 84, in validate_environment
    validate_bnb_backend_availability(raise_exception=True)
  File "/home/sdp/miniforge3/envs/zejun_ccl/lib/python3.10/site-packages/transformers/integrations/bitsandbytes.py", line 561, in validate_bnb_backend_availability
    return _validate_bnb_cuda_backend_availability(raise_exception)
  File "/home/sdp/miniforge3/envs/zejun_ccl/lib/python3.10/site-packages/transformers/integrations/bitsandbytes.py", line 539, in _validate_bnb_cuda_backend_availability
    raise RuntimeError(log_msg)
RuntimeError: CUDA is required but not available for bitsandbytes. Please consider installing the multi-platform enabled version of bitsandbytes, which is currently a work in progress. Please check currently supported platforms and installation instructions at https://huggingface.co/docs/bitsandbytes/main/en/installation#multi-backend
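The transformers validator that raises this error ultimately checks whether an accelerator backend is visible to torch before allowing 4-bit loading. As a quick way to see what the environment actually exposes, a small diagnostic like the following may help (a sketch under assumptions: `diagnose_bnb_backend` is a hypothetical helper, not part of bitsandbytes; the `torch.xpu` check assumes an Intel-GPU-enabled torch or IPEX build):

```python
import importlib.util

def diagnose_bnb_backend():
    """Report which accelerator backends this environment appears to expose.

    Rough diagnostic only: it mirrors the kind of availability checks the
    transformers bitsandbytes integration performs, without importing
    bitsandbytes itself.
    """
    report = {}
    # Is torch importable at all?
    if importlib.util.find_spec("torch") is None:
        report["torch"] = False
        return report
    import torch
    report["torch"] = True
    # CUDA backend (what the stock bitsandbytes wheel requires).
    report["cuda"] = torch.cuda.is_available()
    # Intel GPUs are exposed via the XPU device type (recent torch or IPEX).
    report["xpu"] = hasattr(torch, "xpu") and torch.xpu.is_available()
    # Is any bitsandbytes package installed?
    report["bitsandbytes_installed"] = (
        importlib.util.find_spec("bitsandbytes") is not None
    )
    return report

if __name__ == "__main__":
    print(diagnose_bnb_backend())
```

If `xpu` is True but the error still appears, the installed bitsandbytes wheel is likely the CUDA-only build rather than the multi-backend one linked in the error message.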
