Thanks for sharing your code; I believe your work is very good.
I ran your code on a multi-class dataset because I want to do multi-category generation.
I didn't modify your overall framework or the loss calculations, but I changed the network slightly to process a categorical vector, so that the latent vector z contains category information.
But after 4 or 5 epochs of training, the reconstruction loss became negative. According to equations 23 and 26 of http://arxiv.org/abs/1308.0850, if pi * Norm is greater than 1, then the log value is positive and the loss becomes negative via
result1 = -torch.log(result1 + epsilon).
But Norm is the probability density of a bivariate Gaussian, so I expected Norm ∈ [0,1] and pi ∈ (0,1).
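For context, here is a minimal sketch of the computation I'm describing, following equations 24-26 of the paper. The variable names are illustrative, not the exact ones in your repo:

```python
import math
import torch

def bivariate_normal_pdf(dx, dy, mu_x, mu_y, sigma_x, sigma_y, rho):
    """Density of a bivariate Gaussian, eq. 24-25 of arXiv:1308.0850.

    All arguments are tensors of the same shape (one entry per mixture
    component). Note this is a density, not a probability: when sigma_x
    and sigma_y are small, the returned value can exceed 1.
    """
    z_x = (dx - mu_x) / sigma_x
    z_y = (dy - mu_y) / sigma_y
    z = z_x ** 2 + z_y ** 2 - 2.0 * rho * z_x * z_y
    denom = 2.0 * math.pi * sigma_x * sigma_y * torch.sqrt(1.0 - rho ** 2)
    return torch.exp(-z / (2.0 * (1.0 - rho ** 2))) / denom

def reconstruction_loss(pi, norm, epsilon=1e-5):
    """Eq. 26: L = -log(sum_j pi_j * N_j).

    Because N_j is a density, the mixture sum pi * Norm can be > 1,
    and -log(...) then goes negative, which is what I'm observing.
    """
    result1 = torch.sum(pi * norm, dim=-1)   # mixture density per step
    result1 = -torch.log(result1 + epsilon)  # the line quoted above
    return result1.mean()
```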
So how can I handle this? Could you please give me some advice?
By the way, I don't know whether this is caused by the number of training iterations: there are over 400K sketches in the training dataset, so one epoch contains over 4K g_steps.