
How should I fix this multi-GPU fine-tuning error? #2776

Open
Fanxhion opened this issue Dec 26, 2024 · 1 comment
Comments

@Fanxhion

[Error message]
The error message is as follows (the launch script is pasted at the end):
run sh: /usr/local/bin/python -m torch.distributed.run --nproc_per_node 3 /usr/local/lib/python3.10/site-packages/swift/cli/sft.py --model_type qwen --model /app/ms-swift/model_cache/hub/qwen/Qwen1___5-7B-Chat --dataset /app/ms-swift/datasets/datasets/NER_dataset/ccfbdci_01.jsonl --train_type lora --torch_dtype bfloat16 --max_length 2048 --learning_rate 1e-4 --num_train_epochs 1 --check_model_is_latest False --output_dir output_NER --deepspeed zero2 --gradient_accumulation_steps 5 --save_steps 100 --logging_steps 10
WARNING:__main__:


Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.


2024-12-26 16:53:39.988897: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-26 16:53:40.034857: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-26 16:53:40.074963: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-12-26 16:53:41.188464: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2024-12-26 16:53:41.311281: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2024-12-26 16:53:41.341972: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
[INFO:swift] Successfully registered /usr/local/lib/python3.10/site-packages/swift/llm/dataset/data/dataset_info.json
[INFO:swift] Successfully registered []
[INFO:swift] rank: 0, local_rank: 0, world_size: 3, local_world_size: 3
[INFO:swift] Loading the model using model_dir: /app/ms-swift/model_cache/hub/qwen/Qwen1___5-7B-Chat
[INFO:swift] Using deepspeed: {'fp16': {'enabled': 'auto', 'loss_scale': 0, 'loss_scale_window': 1000, 'initial_scale_power': 16, 'hysteresis': 2, 'min_loss_scale': 1}, 'bf16': {'enabled': 'auto'}, 'optimizer': {'type': 'AdamW', 'params': {'lr': 'auto', 'betas': 'auto', 'eps': 'auto', 'weight_decay': 'auto'}}, 'scheduler': {'type': 'WarmupCosineLR', 'params': {'total_num_steps': 'auto', 'warmup_num_steps': 'auto'}}, 'zero_optimization': {'stage': 2, 'offload_optimizer': {'device': 'none', 'pin_memory': True}, 'allgather_partitions': True, 'allgather_bucket_size': 200000000.0, 'overlap_comm': True, 'reduce_scatter': True, 'reduce_bucket_size': 200000000.0, 'contiguous_gradients': True}, 'gradient_accumulation_steps': 'auto', 'gradient_clipping': 'auto', 'steps_per_print': 2000, 'train_batch_size': 'auto', 'train_micro_batch_size_per_gpu': 'auto', 'wall_clock_breakdown': False}
[INFO:swift] Setting args.lazy_tokenize: False
/usr/local/lib/python3.10/site-packages/transformers/training_args.py:1568: FutureWarning: evaluation_strategy is deprecated and will be removed in version 4.46 of 🤗 Transformers. Use eval_strategy instead
warnings.warn(
[2024-12-26 16:53:44,957] [INFO] [real_accelerator.py:219:get_accelerator] Setting ds_accelerator to cuda (auto detect)
/usr/local/lib/python3.10/site-packages/transformers/training_args.py:1568: FutureWarning: evaluation_strategy is deprecated and will be removed in version 4.46 of 🤗 Transformers. Use eval_strategy instead
warnings.warn(
[2024-12-26 16:53:45,145] [INFO] [real_accelerator.py:219:get_accelerator] Setting ds_accelerator to cuda (auto detect)
/usr/local/lib/python3.10/site-packages/transformers/training_args.py:1568: FutureWarning: evaluation_strategy is deprecated and will be removed in version 4.46 of 🤗 Transformers. Use eval_strategy instead
warnings.warn(
[2024-12-26 16:53:45,275] [INFO] [real_accelerator.py:219:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-12-26 16:53:47,739] [INFO] [comm.py:652:init_distributed] cdb=None
[2024-12-26 16:53:47,940] [INFO] [comm.py:652:init_distributed] cdb=None
[2024-12-26 16:53:47,941] [INFO] [comm.py:683:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
2edaa9ce4458:21609:21609 [0] NCCL INFO NCCL_SOCKET_IFNAME set by environment to eth0
2edaa9ce4458:21609:21609 [0] NCCL INFO Bootstrap : Using eth0:172.17.0.6<0>
2edaa9ce4458:21609:21609 [0] NCCL INFO NET/Plugin : dlerror=libnccl-net.so: cannot open shared object file: No such file or directory No plugin found (libnccl-net.so), using internal implementation
2edaa9ce4458:21609:21609 [0] NCCL INFO cudaDriverVersion 12040
NCCL version 2.20.5+cuda12.4
2edaa9ce4458:21611:21611 [2] NCCL INFO cudaDriverVersion 12040
2edaa9ce4458:21611:21611 [2] NCCL INFO NCCL_SOCKET_IFNAME set by environment to eth0
2edaa9ce4458:21611:21611 [2] NCCL INFO Bootstrap : Using eth0:172.17.0.6<0>
2edaa9ce4458:21611:21611 [2] NCCL INFO NET/Plugin : dlerror=libnccl-net.so: cannot open shared object file: No such file or directory No plugin found (libnccl-net.so), using internal implementation
2edaa9ce4458:21609:21718 [0] NCCL INFO NCCL_IB_DISABLE set by environment to 1.
2edaa9ce4458:21609:21718 [0] NCCL INFO NCCL_SOCKET_IFNAME set by environment to eth0
2edaa9ce4458:21609:21718 [0] NCCL INFO NET/Socket : Using [0]eth0:172.17.0.6<0>
2edaa9ce4458:21609:21718 [0] NCCL INFO Using non-device net plugin version 0
2edaa9ce4458:21609:21718 [0] NCCL INFO Using network Socket
2edaa9ce4458:21611:21719 [2] NCCL INFO NCCL_IB_DISABLE set by environment to 1.
2edaa9ce4458:21611:21719 [2] NCCL INFO NCCL_SOCKET_IFNAME set by environment to eth0
2edaa9ce4458:21611:21719 [2] NCCL INFO NET/Socket : Using [0]eth0:172.17.0.6<0>
2edaa9ce4458:21611:21719 [2] NCCL INFO Using non-device net plugin version 0
2edaa9ce4458:21611:21719 [2] NCCL INFO Using network Socket
[2024-12-26 16:53:48,269] [INFO] [comm.py:652:init_distributed] cdb=None
2edaa9ce4458:21610:21610 [1] NCCL INFO cudaDriverVersion 12040
2edaa9ce4458:21610:21610 [1] NCCL INFO NCCL_SOCKET_IFNAME set by environment to eth0
2edaa9ce4458:21610:21610 [1] NCCL INFO Bootstrap : Using eth0:172.17.0.6<0>
2edaa9ce4458:21610:21610 [1] NCCL INFO NET/Plugin : dlerror=libnccl-net.so: cannot open shared object file: No such file or directory No plugin found (libnccl-net.so), using internal implementation
2edaa9ce4458:21610:21723 [1] NCCL INFO NCCL_IB_DISABLE set by environment to 1.
2edaa9ce4458:21610:21723 [1] NCCL INFO NCCL_SOCKET_IFNAME set by environment to eth0
2edaa9ce4458:21610:21723 [1] NCCL INFO NET/Socket : Using [0]eth0:172.17.0.6<0>
2edaa9ce4458:21610:21723 [1] NCCL INFO Using non-device net plugin version 0
2edaa9ce4458:21610:21723 [1] NCCL INFO Using network Socket
2edaa9ce4458:21610:21723 [1] NCCL INFO comm 0x557947fe6110 rank 1 nranks 3 cudaDev 1 nvmlDev 2 busId 39000 commId 0x8c70b5596ee07786 - Init START
2edaa9ce4458:21609:21718 [0] NCCL INFO comm 0x558d3f0de350 rank 0 nranks 3 cudaDev 0 nvmlDev 1 busId 36000 commId 0x8c70b5596ee07786 - Init START
2edaa9ce4458:21611:21719 [2] NCCL INFO comm 0x563ea6445da0 rank 2 nranks 3 cudaDev 2 nvmlDev 3 busId 3d000 commId 0x8c70b5596ee07786 - Init START
2edaa9ce4458:21610:21723 [1] NCCL INFO NCCL_P2P_LEVEL set by environment to LOC
2edaa9ce4458:21610:21723 [1] NCCL INFO Setting affinity for GPU 2 to fc,00000000,00fc0000
2edaa9ce4458:21610:21723 [1] NCCL INFO NVLS multicast support is not available on dev 1
2edaa9ce4458:21609:21718 [0] NCCL INFO NCCL_P2P_LEVEL set by environment to LOC
2edaa9ce4458:21611:21719 [2] NCCL INFO NCCL_P2P_LEVEL set by environment to LOC
2edaa9ce4458:21609:21718 [0] NCCL INFO Setting affinity for GPU 1 to fc,00000000,00fc0000
2edaa9ce4458:21611:21719 [2] NCCL INFO Setting affinity for GPU 3 to fc,00000000,00fc0000
2edaa9ce4458:21609:21718 [0] NCCL INFO NVLS multicast support is not available on dev 0
2edaa9ce4458:21611:21719 [2] NCCL INFO NVLS multicast support is not available on dev 2
2edaa9ce4458:21611:21719 [2] NCCL INFO comm 0x563ea6445da0 rank 2 nRanks 3 nNodes 1 localRanks 3 localRank 2 MNNVL 0
2edaa9ce4458:21610:21723 [1] NCCL INFO comm 0x557947fe6110 rank 1 nRanks 3 nNodes 1 localRanks 3 localRank 1 MNNVL 0
2edaa9ce4458:21609:21718 [0] NCCL INFO comm 0x558d3f0de350 rank 0 nRanks 3 nNodes 1 localRanks 3 localRank 0 MNNVL 0
2edaa9ce4458:21611:21719 [2] NCCL INFO Trees [0] -1/-1/-1->2->1 [1] -1/-1/-1->2->1 [2] -1/-1/-1->2->1 [3] -1/-1/-1->2->1
2edaa9ce4458:21609:21718 [0] NCCL INFO Channel 00/04 : 0 1 2
2edaa9ce4458:21610:21723 [1] NCCL INFO Trees [0] 2/-1/-1->1->0 [1] 2/-1/-1->1->0 [2] 2/-1/-1->1->0 [3] 2/-1/-1->1->0
2edaa9ce4458:21611:21719 [2] NCCL INFO P2P Chunksize set to 131072
2edaa9ce4458:21609:21718 [0] NCCL INFO Channel 01/04 : 0 1 2
2edaa9ce4458:21610:21723 [1] NCCL INFO P2P Chunksize set to 131072
2edaa9ce4458:21609:21718 [0] NCCL INFO Channel 02/04 : 0 1 2
2edaa9ce4458:21609:21718 [0] NCCL INFO Channel 03/04 : 0 1 2
2edaa9ce4458:21609:21718 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1 [2] 1/-1/-1->0->-1 [3] 1/-1/-1->0->-1
2edaa9ce4458:21609:21718 [0] NCCL INFO P2P Chunksize set to 131072

2edaa9ce4458:21610:21723 [1] misc/shmutils.cc:72 NCCL WARN Error: failed to extend /dev/shm/nccl-8XI5Oy to 9637892 bytes

2edaa9ce4458:21610:21723 [1] misc/shmutils.cc:113 NCCL WARN Error while creating shared memory segment /dev/shm/nccl-8XI5Oy (size 9637888)
2edaa9ce4458:21610:21723 [1] NCCL INFO transport/shm.cc:114 -> 2
2edaa9ce4458:21610:21723 [1] NCCL INFO transport.cc:33 -> 2
2edaa9ce4458:21610:21723 [1] NCCL INFO transport.cc:113 -> 2
2edaa9ce4458:21610:21723 [1] NCCL INFO init.cc:1222 -> 2
2edaa9ce4458:21610:21723 [1] NCCL INFO init.cc:1501 -> 2
2edaa9ce4458:21610:21723 [1] NCCL INFO group.cc:64 -> 2 [Async thread]

2edaa9ce4458:21609:21718 [0] misc/shmutils.cc:72 NCCL WARN Error: failed to extend /dev/shm/nccl-9D6bNG to 9637892 bytes

2edaa9ce4458:21609:21718 [0] misc/shmutils.cc:113 NCCL WARN Error while creating shared memory segment /dev/shm/nccl-9D6bNG (size 9637888)

2edaa9ce4458:21611:21719 [2] misc/shmutils.cc:72 NCCL WARN Error: failed to extend /dev/shm/nccl-uc5H5q to 9637892 bytes
2edaa9ce4458:21609:21718 [0] NCCL INFO transport/shm.cc:114 -> 2

2edaa9ce4458:21611:21719 [2] misc/shmutils.cc:113 NCCL WARN Error while creating shared memory segment /dev/shm/nccl-uc5H5q (size 9637888)
2edaa9ce4458:21609:21718 [0] NCCL INFO transport.cc:33 -> 2
2edaa9ce4458:21609:21718 [0] NCCL INFO transport.cc:113 -> 2
2edaa9ce4458:21611:21719 [2] NCCL INFO transport/shm.cc:114 -> 2
2edaa9ce4458:21611:21719 [2] NCCL INFO transport.cc:33 -> 2
2edaa9ce4458:21611:21719 [2] NCCL INFO transport.cc:113 -> 2
2edaa9ce4458:21611:21719 [2] NCCL INFO init.cc:1222 -> 2
2edaa9ce4458:21609:21718 [0] NCCL INFO init.cc:1222 -> 2
2edaa9ce4458:21611:21719 [2] NCCL INFO init.cc:1501 -> 2
2edaa9ce4458:21609:21718 [0] NCCL INFO init.cc:1501 -> 2
2edaa9ce4458:21611:21719 [2] NCCL INFO group.cc:64 -> 2 [Async thread]
2edaa9ce4458:21609:21718 [0] NCCL INFO group.cc:64 -> 2 [Async thread]
2edaa9ce4458:21610:21610 [1] NCCL INFO group.cc:418 -> 2
2edaa9ce4458:21610:21610 [1] NCCL INFO init.cc:1876 -> 2
2edaa9ce4458:21609:21609 [0] NCCL INFO group.cc:418 -> 2
2edaa9ce4458:21609:21609 [0] NCCL INFO init.cc:1876 -> 2
2edaa9ce4458:21611:21611 [2] NCCL INFO group.cc:418 -> 2
2edaa9ce4458:21611:21611 [2] NCCL INFO init.cc:1876 -> 2
2edaa9ce4458:21609:21609 [0] NCCL INFO comm 0x558d3f0de350 rank 0 nranks 3 cudaDev 0 busId 36000 - Abort COMPLETE
2edaa9ce4458:21611:21611 [2] NCCL INFO comm 0x563ea6445da0 rank 2 nranks 3 cudaDev 2 busId 3d000 - Abort COMPLETE
2edaa9ce4458:21610:21610 [1] NCCL INFO comm 0x557947fe6110 rank 1 nranks 3 cudaDev 1 busId 39000 - Abort COMPLETE
[rank2]: Traceback (most recent call last):
[rank2]: File "/usr/local/lib/python3.10/site-packages/swift/cli/sft.py", line 5, in <module>
[rank2]: sft_main()
[rank2]: File "/usr/local/lib/python3.10/site-packages/swift/llm/train/sft.py", line 272, in sft_main
[rank2]: return SwiftSft(args).main()
[rank2]: File "/usr/local/lib/python3.10/site-packages/swift/llm/train/sft.py", line 29, in __init__
[rank2]: super().__init__(args)
[rank2]: File "/usr/local/lib/python3.10/site-packages/swift/llm/base.py", line 18, in __init__
[rank2]: self.args = self._parse_args(args)
[rank2]: File "/usr/local/lib/python3.10/site-packages/swift/llm/base.py", line 27, in _parse_args
[rank2]: args, remaining_argv = parse_args(self.args_class, args)
[rank2]: File "/usr/local/lib/python3.10/site-packages/swift/utils/utils.py", line 146, in parse_args
[rank2]: args, remaining_args = parser.parse_args_into_dataclasses(argv, return_remaining_strings=True)
[rank2]: File "/usr/local/lib/python3.10/site-packages/transformers/hf_argparser.py", line 352, in parse_args_into_dataclasses
[rank2]: obj = dtype(**inputs)
[rank2]: File "<string>", line 279, in __init__
[rank2]: File "/usr/local/lib/python3.10/site-packages/swift/llm/argument/train_args.py", line 154, in __post_init__
[rank2]: self._add_version()
[rank2]: File "/usr/local/lib/python3.10/site-packages/swift/llm/argument/train_args.py", line 196, in _add_version
[rank2]: self.output_dir = add_version_to_work_dir(self.output_dir)
[rank2]: File "/usr/local/lib/python3.10/site-packages/swift/utils/utils.py", line 127, in add_version_to_work_dir
[rank2]: dist.broadcast_object_list(obj_list)
[rank2]: File "/usr/local/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 79, in wrapper
[rank2]: return func(*args, **kwargs)
[rank2]: File "/usr/local/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2901, in broadcast_object_list
[rank2]: broadcast(object_sizes_tensor, src=src, group=group)
[rank2]: File "/usr/local/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 79, in wrapper
[rank2]: return func(*args, **kwargs)
[rank2]: File "/usr/local/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2205, in broadcast
[rank2]: work = default_pg.broadcast([tensor], opts)
[rank2]: torch.distributed.DistBackendError: NCCL error in: ../torch/csrc/distributed/c10d/NCCLUtils.hpp:275, unhandled system error (run with NCCL_DEBUG=INFO for details), NCCL version 2.20.5
[rank2]: ncclSystemError: System call (e.g. socket, malloc) or external library call failed or device error.
[rank2]: Last error:
[rank2]: Error while creating shared memory segment /dev/shm/nccl-uc5H5q (size 9637888)
[rank0]: Traceback (most recent call last):
[rank0]: File "/usr/local/lib/python3.10/site-packages/swift/cli/sft.py", line 5, in <module>
[rank0]: sft_main()
[rank0]: File "/usr/local/lib/python3.10/site-packages/swift/llm/train/sft.py", line 272, in sft_main
[rank0]: return SwiftSft(args).main()
[rank0]: File "/usr/local/lib/python3.10/site-packages/swift/llm/train/sft.py", line 29, in __init__
[rank0]: super().__init__(args)
[rank0]: File "/usr/local/lib/python3.10/site-packages/swift/llm/base.py", line 18, in __init__
[rank0]: self.args = self._parse_args(args)
[rank0]: File "/usr/local/lib/python3.10/site-packages/swift/llm/base.py", line 27, in _parse_args
[rank0]: args, remaining_argv = parse_args(self.args_class, args)
[rank0]: File "/usr/local/lib/python3.10/site-packages/swift/utils/utils.py", line 146, in parse_args
[rank0]: args, remaining_args = parser.parse_args_into_dataclasses(argv, return_remaining_strings=True)
[rank0]: File "/usr/local/lib/python3.10/site-packages/transformers/hf_argparser.py", line 352, in parse_args_into_dataclasses
[rank0]: obj = dtype(**inputs)
[rank0]: File "<string>", line 279, in __init__
[rank0]: File "/usr/local/lib/python3.10/site-packages/swift/llm/argument/train_args.py", line 154, in __post_init__
[rank0]: self._add_version()
[rank0]: File "/usr/local/lib/python3.10/site-packages/swift/llm/argument/train_args.py", line 196, in _add_version
[rank0]: self.output_dir = add_version_to_work_dir(self.output_dir)
[rank0]: File "/usr/local/lib/python3.10/site-packages/swift/utils/utils.py", line 127, in add_version_to_work_dir
[rank0]: dist.broadcast_object_list(obj_list)
[rank0]: File "/usr/local/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 79, in wrapper
[rank0]: return func(*args, **kwargs)
[rank0]: File "/usr/local/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2901, in broadcast_object_list
[rank0]: broadcast(object_sizes_tensor, src=src, group=group)
[rank0]: File "/usr/local/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 79, in wrapper
[rank0]: return func(*args, **kwargs)
[rank0]: File "/usr/local/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2205, in broadcast
[rank0]: work = default_pg.broadcast([tensor], opts)
[rank0]: torch.distributed.DistBackendError: NCCL error in: ../torch/csrc/distributed/c10d/NCCLUtils.hpp:275, unhandled system error (run with NCCL_DEBUG=INFO for details), NCCL version 2.20.5
[rank0]: ncclSystemError: System call (e.g. socket, malloc) or external library call failed or device error.
[rank0]: Last error:
[rank0]: Error while creating shared memory segment /dev/shm/nccl-9D6bNG (size 9637888)
[rank1]: Traceback (most recent call last):
[rank1]: File "/usr/local/lib/python3.10/site-packages/swift/cli/sft.py", line 5, in <module>
[rank1]: sft_main()
[rank1]: File "/usr/local/lib/python3.10/site-packages/swift/llm/train/sft.py", line 272, in sft_main
[rank1]: return SwiftSft(args).main()
[rank1]: File "/usr/local/lib/python3.10/site-packages/swift/llm/train/sft.py", line 29, in __init__
[rank1]: super().__init__(args)
[rank1]: File "/usr/local/lib/python3.10/site-packages/swift/llm/base.py", line 18, in __init__
[rank1]: self.args = self._parse_args(args)
[rank1]: File "/usr/local/lib/python3.10/site-packages/swift/llm/base.py", line 27, in _parse_args
[rank1]: args, remaining_argv = parse_args(self.args_class, args)
[rank1]: File "/usr/local/lib/python3.10/site-packages/swift/utils/utils.py", line 146, in parse_args
[rank1]: args, remaining_args = parser.parse_args_into_dataclasses(argv, return_remaining_strings=True)
[rank1]: File "/usr/local/lib/python3.10/site-packages/transformers/hf_argparser.py", line 352, in parse_args_into_dataclasses
[rank1]: obj = dtype(**inputs)
[rank1]: File "<string>", line 279, in __init__
[rank1]: File "/usr/local/lib/python3.10/site-packages/swift/llm/argument/train_args.py", line 154, in __post_init__
[rank1]: self._add_version()
[rank1]: File "/usr/local/lib/python3.10/site-packages/swift/llm/argument/train_args.py", line 196, in _add_version
[rank1]: self.output_dir = add_version_to_work_dir(self.output_dir)
[rank1]: File "/usr/local/lib/python3.10/site-packages/swift/utils/utils.py", line 127, in add_version_to_work_dir
[rank1]: dist.broadcast_object_list(obj_list)
[rank1]: File "/usr/local/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 79, in wrapper
[rank1]: return func(*args, **kwargs)
[rank1]: File "/usr/local/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2901, in broadcast_object_list
[rank1]: broadcast(object_sizes_tensor, src=src, group=group)
[rank1]: File "/usr/local/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 79, in wrapper
[rank1]: return func(*args, **kwargs)
[rank1]: File "/usr/local/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2205, in broadcast
[rank1]: work = default_pg.broadcast([tensor], opts)
[rank1]: torch.distributed.DistBackendError: NCCL error in: ../torch/csrc/distributed/c10d/NCCLUtils.hpp:275, unhandled system error (run with NCCL_DEBUG=INFO for details), NCCL version 2.20.5
[rank1]: ncclSystemError: System call (e.g. socket, malloc) or external library call failed or device error.
[rank1]: Last error:
[rank1]: Error while creating shared memory segment /dev/shm/nccl-8XI5Oy (size 9637888)
[rank0]:[W1226 16:53:50.173962133 ProcessGroupNCCL.cpp:1168] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present, but this warning has only been added since PyTorch 2.4 (function operator())
W1226 16:53:51.497000 140105620675456 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 21609 closing signal SIGTERM
W1226 16:53:51.497000 140105620675456 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 21610 closing signal SIGTERM
E1226 16:53:51.693000 140105620675456 torch/distributed/elastic/multiprocessing/api.py:833] failed (exitcode: 1) local_rank: 2 (pid: 21611) of binary: /usr/local/bin/python
Traceback (most recent call last):
File "/usr/local/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.10/site-packages/torch/distributed/run.py", line 905, in <module>
main()
File "/usr/local/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 348, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/distributed/run.py", line 901, in main
run(args)
File "/usr/local/lib/python3.10/site-packages/torch/distributed/run.py", line 892, in run
elastic_launch(
File "/usr/local/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 133, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/usr/local/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

/usr/local/lib/python3.10/site-packages/swift/cli/sft.py FAILED

Failures:
<NO_OTHER_FAILURES>

Root Cause (first observed failure):
[0]:
time : 2024-12-26_16:53:51
host : 2edaa9ce4458
rank : 2 (local_rank: 2)
exitcode : 1 (pid: 21611)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

==========================================================

[Launch script]

Experimental environment: 3 × RTX 3090 (72 GB of GPU memory in total)

nproc_per_node=3

CUDA_VISIBLE_DEVICES=1,2,3 \
NPROC_PER_NODE=$nproc_per_node \
swift sft \
    --model_type qwen \
    --model /app/ms-swift/model_cache/hub/qwen/Qwen1___5-7B-Chat \
    --dataset /app/ms-swift/datasets/datasets/NER_dataset/ccfbdci_01.jsonl \
    --train_type lora \
    --torch_dtype bfloat16 \
    --max_length 2048 \
    --learning_rate 1e-4 \
    --num_train_epochs 1 \
    --check_model_is_latest False \
    --output_dir output_NER \
    --deepspeed zero2 \
    --gradient_accumulation_steps $(expr 16 / $nproc_per_node) \
    --save_steps 100 \
    --logging_steps 10

@Jintao-Huang
Copy link
Collaborator

This looks like a multi-GPU communication problem.
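Specifically, the key lines in the log are the NCCL warnings `Error while creating shared memory segment /dev/shm/nccl-... (size 9637888)`. The hostname `2edaa9ce4458` suggests the job runs inside a Docker container, where `/dev/shm` defaults to only 64 MB and can fill up before NCCL finishes allocating its per-rank segments. A minimal sketch of the usual checks and workarounds (the 16g size below is an arbitrary example, not a measured requirement):

```shell
# Check how large /dev/shm actually is inside the container; NCCL wanted
# ~9.2 MB per segment here, and Docker's default is a 64 MB tmpfs total.
df -h /dev/shm 2>/dev/null || true

# Workaround: make NCCL skip the shared-memory transport and fall back
# to another transport (slower, but avoids /dev/shm entirely).
export NCCL_SHM_DISABLE=1

# Preferred fix: recreate the container with a bigger /dev/shm, e.g.
#   docker run --shm-size=16g ...
# or share the host's shared memory with:
#   docker run --ipc=host ...
```

If `NCCL_SHM_DISABLE=1` lets training start, that confirms the shared-memory segment creation was the culprit and enlarging `--shm-size` is the proper long-term fix.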
