Is it safe to ignore this warning, or should I add sync_dist=True? #909
- When training and validating with multiple GPUs, is sync_dist set to False by default?
- Is there a specific place or configuration where it can be set?
- Should I add `sync_dist=True` to that code?
Warning
C:\Users\software\anaconda3\envs\rf_detr\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\result.py:433: It is recommended to use `self.log('train/loss_ce', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
C:\Users\software\anaconda3\envs\rf_detr\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\result.py:433: It is recommended to use `self.log('train/class_error', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
C:\Users\software\anaconda3\envs\rf_detr\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\result.py:433: It is recommended to use `self.log('train/loss_bbox', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
C:\Users\software\anaconda3\envs\rf_detr\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\result.py:433: It is recommended to use `self.log('train/loss_giou', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
C:\Users\software\anaconda3\envs\rf_detr\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\result.py:433: It is recommended to use `self.log('train/cardinality_error', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
C:\Users\software\anaconda3\envs\rf_detr\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\result.py:433: It is recommended to use `self.log('train/loss_ce_0', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
C:\Users\software\anaconda3\envs\rf_detr\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\result.py:433: It is recommended to use `self.log('train/loss_bbox_0', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
C:\Users\software\anaconda3\envs\rf_detr\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\result.py:433: It is recommended to use `self.log('train/loss_giou_0', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
C:\Users\software\anaconda3\envs\rf_detr\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\result.py:433: It is recommended to use `self.log('train/cardinality_error_0', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
C:\Users\software\anaconda3\envs\rf_detr\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\result.py:433: It is recommended to use `self.log('train/loss_ce_1', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
C:\Users\software\anaconda3\envs\rf_detr\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\result.py:433: It is recommended to use `self.log('train/loss_bbox_1', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
C:\Users\software\anaconda3\envs\rf_detr\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\result.py:433: It is recommended to use `self.log('train/loss_giou_1', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
C:\Users\software\anaconda3\envs\rf_detr\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\result.py:433: It is recommended to use `self.log('train/cardinality_error_1', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
C:\Users\software\anaconda3\envs\rf_detr\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\result.py:433: It is recommended to use `self.log('train/loss_ce_enc', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
C:\Users\software\anaconda3\envs\rf_detr\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\result.py:433: It is recommended to use `self.log('train/loss_bbox_enc', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
C:\Users\software\anaconda3\envs\rf_detr\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\result.py:433: It is recommended to use `self.log('train/loss_giou_enc', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
C:\Users\software\anaconda3\envs\rf_detr\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\result.py:433: It is recommended to use `self.log('train/cardinality_error_enc', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
C:\Users\software\anaconda3\envs\rf_detr\lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\result.py:433: It is recommended to use `self.log('train/loss', ..., sync_dist=True)` when logging on epoch level in distributed setting to accumulate the metric across devices.
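For context on the first two questions: `sync_dist` defaults to `False` in `self.log`, and it is passed per `self.log(...)` call inside the `LightningModule` rather than set in a global config. Without it, each rank's epoch-level metric is averaged only over that rank's own batches, so the value in the logger reflects a single GPU's data shard. Below is a minimal pure-Python simulation of the difference, assuming Lightning's default `reduce_fx="mean"`; the rank count and loss values are made up for illustration:

```python
# Simulate epoch-level logging of train/loss on 2 GPUs (ranks),
# assuming Lightning's default reduce_fx="mean" for on_epoch logging.
# The per-step loss values below are illustrative, not from a real run.
rank_losses = {0: [0.9, 0.7], 1: [0.5, 0.3]}

# sync_dist=False (the default): each rank averages only its own steps,
# so the logged value on rank 0 is biased toward rank 0's data shard.
local_means = {r: sum(v) / len(v) for r, v in rank_losses.items()}

# sync_dist=True: the local means are all-reduced (averaged) across
# ranks, so every rank logs the same global mean.
global_mean = sum(local_means.values()) / len(local_means)

print(round(local_means[0], 4))  # rank 0's local mean (unsynced)
print(round(global_mean, 4))     # the value all ranks log when synced
```

If the numbers differ noticeably between ranks, the unsynced epoch metric can be misleading; applying the warning's suggestion would mean adding `sync_dist=True` to each `self.log(...)` call it names, e.g. `self.log('train/loss_ce', value, sync_dist=True)`. For training losses this mainly changes what is reported, not what is optimized, since DDP already synchronizes gradients.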