hello #9
On a single V100, training times of the different stages are: stage1 (18 hours), stage2 (2 days), stage3 (15 hours).
I am reproducing the results on a single V100. While training stage1, I observe very low GPU utilization, often sitting at 0 for long stretches. Is there a solution?
I didn't have such a problem. Is the code running? Maybe you can increase the batch size and then reduce the training iterations.
The code is indeed training. I wonder whether a large amount of time is being spent reading the training images, which would explain the low GPU utilization. My training set is just the 800 DIV2K images, with no LR images. Is that correct?
Yes, that is possible. I remember my GPU-Util (from nvidia-smi) was reasonably high. Maybe you can try setting pin_memory=True and using a larger num_workers in the data loader.
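A minimal sketch of the suggestion above, using a toy tensor dataset in place of the actual MANet/DIV2K pipeline (the dataset and shapes here are illustrative, not the repo's real data code):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in for the training set: 64 RGB patches of size 32x32.
images = torch.randn(64, 3, 32, 32)
dataset = TensorDataset(images)

# pin_memory=True stages each batch in page-locked host memory, so the
# host-to-GPU copy can overlap with compute; num_workers > 0 loads and
# preprocesses batches in background worker processes so the GPU is not
# left idle waiting for data.
loader = DataLoader(
    dataset,
    batch_size=16,
    shuffle=True,
    num_workers=4,
    pin_memory=True,
)

for (batch,) in loader:
    # With pinned memory, non_blocking=True makes the copy asynchronous:
    # batch = batch.to('cuda', non_blocking=True)
    pass
```

Raising num_workers helps most when the bottleneck is CPU-side decoding or on-the-fly degradation; if the workers themselves are too slow, utilization will still dip between batches.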
Hello. Did you manage to solve this problem? I have a similar issue.
No.
Hello, did you ever find a solution to this problem?
None of these suggestions solve the problem; the GPU still sits at 0% utilization for long periods.
I ran into this problem too. Generating the LQ images and blur kernels on the fly is extremely time-consuming. I tried saving the blur kernels to csv and reading them back, but reading 192x192x441 kernels is also extremely slow, even slower than generating them on the fly...
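One hypothetical alternative to csv, not from the repo itself: store the precomputed kernels in binary `.npy` format and memory-map them. csv forces text parsing of every float, whereas `.npy` is a raw binary dump, and `mmap_mode='r'` lets each data-loader worker page in only the slices it actually uses. The kernel generator below is a random stand-in for the real one; the shape matches the 192x192x441 mentioned above.

```python
import numpy as np

# Precompute once: random data stands in for the real blur-kernel generator.
kernels = np.random.rand(192, 192, 441).astype(np.float32)
np.save('kernels.npy', kernels)

# At training time: memory-map the file instead of reading it whole.
loaded = np.load('kernels.npy', mmap_mode='r')

# Materialize just the patch a given sample needs (here an arbitrary 48x48
# spatial crop), so only those pages are read from disk.
patch = np.asarray(loaded[0:48, 0:48])
```

Whether this beats on-the-fly generation depends on disk speed and how large a crop each sample needs, so it is worth benchmarking both.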
Could I discuss this paper with you? My email is wenwlmail.163.com, thanks.
May I ask how long you trained for? Thanks.