Distributed training on Cloud TPUs stuck at 0% #2455

Answered by muhd-umer
muhd-umer asked this question in Q&A

It seems the Colab TPU drivers were the cause of this issue, rather than the code itself. Since distributed training is now working as expected, I'm marking this as answered.
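The thread doesn't include the original training code, but a quick way to separate driver-side problems from code-side ones is to check whether the TPU runtime initializes at all before any training starts. Below is a minimal sketch assuming a TensorFlow `TPUStrategy` setup on Colab; the framework isn't stated in this thread, so treat it as illustrative:

```python
import tensorflow as tf

# Resolve the Colab-managed TPU. If the TPU runtime/drivers are
# unhealthy, this step hangs or raises, independent of any model code.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# Confirm all TPU cores are visible before launching distributed training.
print("TPU devices:", tf.config.list_logical_devices("TPU"))

strategy = tf.distribute.TPUStrategy(resolver)
print("Replicas in sync:", strategy.num_replicas_in_sync)
```

If this initialization itself stalls, the problem is on the runtime/driver side, consistent with the diagnosis above; restarting the Colab runtime to get a fresh TPU host is the usual remedy.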

Answer selected by muhd-umer

This discussion was converted from issue #2453 on September 12, 2022 14:21.