
Train loss drops, but val loss goes up with denoising_diffusion_pytorch_1d #149

Open
BEbillionaireUSD opened this issue Jan 10, 2023 · 2 comments


@BEbillionaireUSD

Hey,

I use the pred_x0 mode and the default parameters, and I trained the network on a time-series dataset. The training loss keeps decreasing smoothly, but the validation loss, which was very small for the first two epochs, soars suddenly at the 3rd epoch. After that, the validation loss whipsaws instead of dropping even as the training loss continues to go down. I would really appreciate help with hyper-parameter settings and troubleshooting. Thanks in advance for any help.

@Adrian744

This looks like an overfitting problem. Try increasing or decreasing your learning rate and batch size. There is no "perfect" answer; it is always trial and error.

@kirilzilla

kirilzilla commented Jul 6, 2023

I do not see any explicit calculation or logging of a validation loss in the code. Could you share the corresponding code? I am also looking into the 1D version. Best wishes.
