Traceback (most recent call last):
File "/home/hongrui/anaconda3/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 59, in _wrap
fn(i, *args)
File "/home/hongrui/project/metro_pro/instance-segmentation-pytorch/code/train_engine.py", line 60, in main_worker
worker.validation(epoch)
File "/home/hongrui/project/metro_pro/instance-segmentation-pytorch/code/train_worker.py", line 219, in validation
mb_out_metrics, loss, outputs = self.forward(
File "/home/hongrui/project/metro_pro/instance-segmentation-pytorch/code/train_worker.py", line 399, in forward
disc_cost = self.criterion_discriminative(
File "/home/hongrui/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/hongrui/project/metro_pro/instance-segmentation-pytorch/code/lib/losses/discriminative.py", line 180, in forward
return discriminative_loss(input, target, n_objects, max_n_objects,
File "/home/hongrui/project/metro_pro/instance-segmentation-pytorch/code/lib/losses/discriminative.py", line 147, in discriminative_loss
cluster_means = calculate_means(
File "/home/hongrui/project/metro_pro/instance-segmentation-pytorch/code/lib/losses/discriminative.py", line 20, in calculate_means
pred_masked = pred_repeated * gt_expanded
RuntimeError: CUDA out of memory. Tried to allocate 6.00 GiB (GPU 0; 39.59 GiB total capacity; 27.21 GiB already allocated; 2.05 GiB free; 35.53 GiB reserved in total by PyTorch)
The full error message is shown above. The error only occurs during validation, after the training phase has completed.
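One common cause of an OOM that only appears at validation time is running the validation forward pass without torch.no_grad(): the discriminative loss then keeps the whole autograd graph alive on top of the memory still cached from training. The snippet below is only a minimal sketch of that idea, assuming a validation loop roughly like the one in train_worker.py; self.val_loader, self.model, and the exact arguments passed to self.forward are placeholders and may not match the repo's actual names.

```python
import torch

def validation(self, epoch):
    """Hypothetical sketch of an OOM-safe validation loop (not the repo's actual code)."""
    self.model.eval()
    torch.cuda.empty_cache()   # release cached blocks left over from the training phase
    with torch.no_grad():      # skip the autograd graph so activations are freed immediately
        for minibatch in self.val_loader:          # placeholder name for the validation loader
            mb_out_metrics, loss, outputs = self.forward(minibatch)
            # ... accumulate metrics / losses as before ...
    self.model.train()
```

If validation already runs under torch.no_grad(), the other obvious lever is reducing the validation batch size (or the maximum number of instances per image), since the tensor built by pred_repeated * gt_expanded in calculate_means grows with the batch size, the number of instances, and the spatial resolution.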