Multi-GPU Inference #11019
Unanswered
angadkalra asked this question in DDP / multi-GPU / multi-node
Replies: 1 comment 1 reply
-
Dear @angadkalra, you could rely on all_gather to send the batches across, but this isn't best practice as it can be costly and quite error-prone. A better solution would be to write the predictions to a database, with a file lock so each process can write safely.
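A minimal sketch of the second suggestion, assuming the third-party `filelock` package and a JSON-lines file as the shared sink (both are illustrative choices, not something prescribed in the reply):

```python
import json

import pytorch_lightning as pl
from filelock import FileLock  # pip install filelock

PRED_FILE = "predictions.jsonl"       # hypothetical output path
LOCK_FILE = "predictions.jsonl.lock"  # lock file guarding concurrent appends


class LitModel(pl.LightningModule):
    def test_step(self, batch, batch_idx):
        preds = self(batch)  # whatever your forward pass returns
        records = [
            {"rank": self.global_rank, "batch_idx": batch_idx, "pred": p.tolist()}
            for p in preds
        ]
        # Every rank appends its own predictions; the file lock serialises the
        # writes so lines from different processes never interleave.
        with FileLock(LOCK_FILE):
            with open(PRED_FILE, "a") as f:
                for record in records:
                    f.write(json.dumps(record) + "\n")
```

The same pattern carries over to a database: each rank opens its own connection and inserts rows, relying on the database's transaction handling instead of a file lock.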
-
Is it possible to do DDP multi-GPU inference and write to a file at the end? I tried, and I only get the outputs of one GPU in the file. Is there a way to all_gather at the end of the test epoch and write all the output_dicts returned at the end of every step to a single .pkl file?
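A hedged sketch of the all_gather route described here, assuming the Lightning 1.x-style test_epoch_end hook and that each test_step returns a dict holding a single logits tensor (illustrative assumptions, not details from the thread):

```python
import pickle

import torch
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def test_step(self, batch, batch_idx):
        logits = self(batch)
        return {"logits": logits}

    def test_epoch_end(self, outputs):
        # Concatenate this rank's per-step outputs, then gather across ranks.
        local = torch.cat([o["logits"] for o in outputs], dim=0)
        gathered = self.all_gather(local)  # shape (world_size, N, ...) under DDP
        if self.trainer.is_global_zero:
            # Flatten the world_size dimension and write a single pickle file.
            merged = gathered.reshape(-1, *gathered.shape[2:]).cpu()
            with open("test_outputs.pkl", "wb") as f:
                pickle.dump(merged, f)
```

Note that under DDP the DistributedSampler may pad the dataset so every rank sees the same number of batches, so the gathered tensor can contain a few duplicated samples; this is one reason the reply above calls the approach costly and error-prone.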