Using log() and log_dict() in PyTorch Lightning 1.9.4 #17166
Unanswered
vgthengane asked this question in DDP / multi-GPU / multi-node
Replies: 0 comments
I need some clarification on `log_dict()` and `log()`. In single-GPU mode my code used `log_dict()`, but I recently updated it to support DDP. After that change I started getting this warning:

`PossibleUserWarning: It is recommended to use 'self.log('some_val', ..., sync_dist=True)' when logging on epoch level in distributed setting to accumulate the metric across devices.`

I went through some issues on GitHub, but all of them use `log()`. In my case I have to log several metrics at once, and I found that `log_dict` also supports `sync_dist=True`. I would like to know whether `log_dict` works the same way as `log()` or differently, and how I can check that `log_dict` is actually accumulating the metrics across all devices. I am using pytorch_lightning==1.9.4.