
hello #9

Open
ZW-PRO opened this issue Oct 16, 2021 · 11 comments

Comments

@ZW-PRO

ZW-PRO commented Oct 16, 2021

May I ask how long the training took? Thanks.

@JingyunLiang
Owner

On a single V100, the training times for the different stages are: stage 1 (18 hours), stage 2 (2 days), stage 3 (15 hours).

@ZW-PRO
Author

ZW-PRO commented Oct 16, 2021 via email

@JingyunLiang
Owner

I didn't have such a problem. Is the code running? Maybe you can increase the batch size and then reduce the training iterations.

@ZW-PRO
Author

ZW-PRO commented Oct 17, 2021 via email

@JingyunLiang
Owner

Yes, it is possible. I remember that my GPU-Util (from nvidia-smi) was reasonably high. Maybe you can try pin_memory=True and a larger n_worker in the data loader.
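For reference, here is a minimal PyTorch sketch of those loader settings; the dataset is a placeholder, not MANet's actual loader, and the numbers are illustrative:

```python
import torch
from torch.utils.data import DataLoader, Dataset

class DummyHRDataset(Dataset):
    """Placeholder standing in for the HR training set (e.g. 800 DIV2K images)."""
    def __len__(self):
        return 800

    def __getitem__(self, idx):
        # The real loader would read an HR image here and generate the LR patch / kernel on the fly.
        return torch.zeros(3, 192, 192)

loader = DataLoader(
    DummyHRDataset(),
    batch_size=16,
    shuffle=True,
    num_workers=8,           # more worker processes hide disk and kernel-generation latency
    pin_memory=True,         # page-locked host memory speeds up CPU-to-GPU transfer
    persistent_workers=True, # keep workers alive between epochs (PyTorch >= 1.7)
)
```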

@jiandandan001

The code is training. I'm wondering whether a lot of time is being spent reading the training images, which would keep GPU utilization low. My training set is just the 800 DIV2K images, with no LR images. Is that correct?


Hello. Have you solved this problem? I'm running into something similar.

@ZW-PRO
Author

ZW-PRO commented Oct 27, 2021 via email

@ZW-PRO
Author

ZW-PRO commented Nov 3, 2021


Hello, did you ever find a solution to this problem?

@ZW-PRO
Author

ZW-PRO commented Nov 4, 2021

Yes, it is possible. I remember that my GPU-Util (from nvidia-smi) was reasonably high. Maybe you can try pin_memory=True and a larger n_worker in the data loader.

None of these solve the problem; the GPU still sits at 0% utilization for long stretches.
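One way to check whether the stalls really come from data loading is to time the loader wait against the rest of the step. A rough, self-contained sketch (the dummy loader only stands in for the real one):

```python
import time
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy stand-in for the real training loader; swap in the actual one.
loader = DataLoader(TensorDataset(torch.zeros(256, 3, 192, 192)),
                    batch_size=16, num_workers=4, pin_memory=True)

data_time, total_time = 0.0, 0.0
end = time.time()
for (batch,) in loader:
    data_time += time.time() - end             # time spent waiting on the data loader
    if torch.cuda.is_available():
        batch = batch.cuda(non_blocking=True)  # non_blocking pairs with pin_memory=True
        # ... forward / backward / optimizer step would go here ...
        torch.cuda.synchronize()               # so total_time also covers the GPU work
    total_time += time.time() - end
    end = time.time()

print(f"data wait: {data_time:.1f}s of {total_time:.1f}s total")
```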

@ByChelsea

I've run into this problem too. Generating the LQ images and blur kernels on the fly is extremely time-consuming. I tried saving the blur kernels to CSV and reading them back in, but loading a 192x192x441 kernel is also extremely slow, even worse than generating it on the fly...
I haven't found a solution...
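If the kernels are precomputed anyway, a binary format loads far faster than CSV. A minimal sketch, assuming NumPy; the file name and shape are only illustrative:

```python
import numpy as np

# Precompute once per image and save in binary form (no text parsing on load).
kernels = np.random.rand(192, 192, 441).astype(np.float32)
np.save("kernels_0001.npy", kernels)

# At training time, memory-map the file so only the patch actually used is read from disk.
loaded = np.load("kernels_0001.npy", mmap_mode="r")
patch = np.asarray(loaded[0:48, 0:48])  # copies just this 48x48 slice into memory
```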

@wwlCape

wwlCape commented Apr 9, 2023


Could I discuss this paper with you? My email is wenwlmail.163.com. Thanks.
