ModuleNotFoundError: No module named 'accelerate' #962

Open
barisuzar opened this issue Nov 18, 2024 · 0 comments

Hello, I am getting a "ModuleNotFoundError: No module named 'accelerate'" error while performing the conversion operation (pth_to_hf) with the following command, even though accelerate 1.1.1 is installed.

"xtuner convert pth_to_hf pallava_domain_alignment.py domain_alignment_weight.pth domain_alignment_weight_ft"

The Python version I'm currently using is 3.10.2.

You can see some of the installed packages at the bottom.

(new_xtuner_env) D:\new_xtuner_env>xtuner convert pth_to_hf pallava_domain_alignment.py domain_alignment_weight.pth domain_alignment_weight_ft
D:\new_xtuner_env\lib\site-packages\mmengine\optim\optimizer\zero_optimizer.py:11: DeprecationWarning: `TorchScript` support for functional optimizers is deprecated and will be removed in a future PyTorch release. Consider using the `torch.compile` optimizer instead.
  from torch.distributed.optim import \
[2024-11-18 12:20:06,556] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
D:\new_xtuner_env\lib\site-packages\deepspeed\runtime\zero\linear.py:49: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
  def forward(ctx, input, weight, bias=None):
D:\new_xtuner_env\lib\site-packages\deepspeed\runtime\zero\linear.py:67: FutureWarning: `torch.cuda.amp.custom_bwd(args...)` is deprecated. Please use `torch.amp.custom_bwd(args..., device_type='cuda')` instead.
  def backward(ctx, grad_output):
W1118 12:20:06.763703 2816 torch\distributed\elastic\multiprocessing\redirects.py:28] NOTE: Redirects are currently not supported in Windows or MacOs.
Traceback (most recent call last):
  File "D:\new_xtuner_env\lib\site-packages\xtuner\tools\model_converters\pth_to_hf.py", line 7, in <module>
    from accelerate import init_empty_weights
ModuleNotFoundError: No module named 'accelerate'
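
For reference, a minimal diagnostic sketch (hypothetical, not from the original report): one possible cause of "installed but not found" on Windows is that the xtuner console script resolves to a different interpreter than the one where accelerate was installed. Running this from the same prompt shows which interpreter and which accelerate (if any) are actually being used:

# Hypothetical check: which interpreter runs, where the `xtuner` console
# script lives, and whether `accelerate` is importable from that interpreter.
import importlib.util
import shutil
import sys

print("python:", sys.executable)                 # interpreter actually running
print("xtuner script:", shutil.which("xtuner"))  # console script being invoked
spec = importlib.util.find_spec("accelerate")
print("accelerate:", spec.origin if spec else "not importable from this interpreter")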

I tried updating numpy from 1.26.4 to several versions greater than 2.0.0. That seemed to work, since I no longer got any errors about 'accelerate', but this time the output was:

File "D:\new_xtuner_env\lib\site-packages\xtuner\tools\model_converters\pth_to_hf.py", line 7, in <module>
    from accelerate import init_empty_weights
  File "D:\new_xtuner_env\lib\site-packages\accelerate\__init__.py", line 16, in <module>
    from .accelerator import Accelerator
  File "D:\new_xtuner_env\lib\site-packages\accelerate\accelerator.py", line 32, in <module>
    import torch
  File "D:\new_xtuner_env\lib\site-packages\torch\__init__.py", line 2120, in <module>
    from torch._higher_order_ops import cond
  File "D:\new_xtuner_env\lib\site-packages\torch\_higher_order_ops\__init__.py", line 1, in <module>
    from .cond import cond
  File "D:\new_xtuner_env\lib\site-packages\torch\_higher_order_ops\cond.py", line 5, in <module>
    import torch._subclasses.functional_tensor
  File "D:\new_xtuner_env\lib\site-packages\torch\_subclasses\functional_tensor.py", line 42, in <module>
    class FunctionalTensor(torch.Tensor):
  File "D:\new_xtuner_env\lib\site-packages\torch\_subclasses\functional_tensor.py", line 258, in FunctionalTensor
    cpu = _conversion_method_template(device=torch.device("cpu"))
D:\new_xtuner_env\lib\site-packages\torch\_subclasses\functional_tensor.py:258: UserWarning: Failed to initialize NumPy: _ARRAY_API not found (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\torch\csrc\utils\tensor_numpy.cpp:84.)
  cpu = _conversion_method_template(device=torch.device("cpu"))
D:\new_xtuner_env\lib\site-packages\mmengine\optim\optimizer\zero_optimizer.py:11: DeprecationWarning: `TorchScript` support for functional optimizers is deprecated and will be removed in a future PyTorch release. Consider using the `torch.compile` optimizer instead.
  from torch.distributed.optim import \
[2024-11-18 12:17:22,979] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
D:\new_xtuner_env\lib\site-packages\deepspeed\runtime\zero\linear.py:49: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
  def forward(ctx, input, weight, bias=None):
D:\new_xtuner_env\lib\site-packages\deepspeed\runtime\zero\linear.py:67: FutureWarning: `torch.cuda.amp.custom_bwd(args...)` is deprecated. Please use `torch.amp.custom_bwd(args..., device_type='cuda')` instead.
  def backward(ctx, grad_output):
W1118 12:17:23.193279 7808 torch\distributed\elastic\multiprocessing\redirects.py:28] NOTE: Redirects are currently not supported in Windows or MacOs.
11/18 12:17:23 - mmengine - WARNING - WARNING: command error: 'cannot import name 'BUFSIZE' from 'numpy' (D:\new_xtuner_env\lib\site-packages\numpy\__init__.py)'!
11/18 12:17:23 - mmengine - WARNING -
    Arguments received: ['xtuner', 'convert', 'pth_to_hf', 'pallava_domain_alignment.py', 'domain_alignment_weight.pth', 'domain_alignment_weight_ft']. xtuner commands use the following syntax:                                                                            

Libs used:

accelerate 1.1.1
deepspeed 0.11.2+unknown
mmengine 0.10.5
numpy 1.26.4
pip 24.3.1
tokenizers 0.20.3
torch 2.4.0+cu118
torchaudio 2.4.0+cu118
torchvision 0.19.0+cu118
transformers 4.46.2
transformers-stream-generator 0.0.5
xtuner 0.1.23
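
A minimal sketch, assuming the second failure comes from NumPy 2.x: numpy.BUFSIZE was removed in NumPy 2.0 (the mmengine warning above shows exactly that import failing), and the "_ARRAY_API not found" warning suggests this torch build was compiled against NumPy 1.x. Checking versions, and keeping numpy below 2.0 while reinstalling accelerate into the same environment (e.g. python -m pip install "numpy<2" accelerate), may be worth trying; this is a suggestion, not a confirmed fix:

# Sketch of a version sanity check under the assumption above.
import numpy as np
import torch

print("numpy:", np.__version__)   # expected < 2.0 for this torch/mmengine combination
print("torch:", torch.__version__)
print("has numpy.BUFSIZE:", hasattr(np, "BUFSIZE"))  # False on NumPy >= 2.0, which breaks mmengine's import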
