Setting random seed before training #9268
Answered by awaelchli
w2kun asked this question in DDP / multi-GPU / multi-node
Hi, do I need to manually set the random seed to ensure that state (e.g. the initialized parameters) is synchronized across processes when using distributed training?
Answered by awaelchli on Sep 4, 2021

Replies: 1 comment
No, the parameters get broadcast the first time the model performs a forward pass. This is implemented in PyTorch's DistributedDataParallel wrapper, so it applies to Lightning as well.
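As a minimal sketch (not part of the original thread) of what this means in practice, the script below builds the model with a different seed on every rank, wraps it in DistributedDataParallel, runs one forward pass, and then checks that all ranks ended up with rank 0's weights. The gloo backend, port, and the small Linear model are assumptions chosen so the example runs on CPU.

```python
# Sketch: verify that DDP synchronizes parameters across ranks even when each
# process initializes its model with a different random seed.
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP


def worker(rank: int, world_size: int) -> None:
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"  # assumed free port
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # Deliberately use a different seed per rank -> different initial weights.
    torch.manual_seed(rank)
    model = torch.nn.Linear(4, 2)

    # DDP synchronizes the module state from rank 0 (at the latest by the
    # first forward pass), so no manual seeding is needed for parameter sync.
    ddp_model = DDP(model)
    ddp_model(torch.randn(3, 4))

    # Gather a weight checksum from every rank and compare.
    checksum = ddp_model.module.weight.sum().reshape(1)
    gathered = [torch.zeros_like(checksum) for _ in range(world_size)]
    dist.all_gather(gathered, checksum)
    if rank == 0:
        assert all(torch.allclose(g, gathered[0]) for g in gathered)
        print("All ranks share rank 0's parameters:", [g.item() for g in gathered])

    dist.destroy_process_group()


if __name__ == "__main__":
    mp.spawn(worker, args=(2,), nprocs=2)
```

Note that a seed can still be useful for other sources of randomness (e.g. data augmentation or dropout reproducibility); it is just not required for keeping the model parameters identical across processes.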
Answer selected by w2kun