Hi,
I have trained the VQVAE network on my own dataset of 10,000 unlabeled images of 64×64 pixels. To train the PixelCNN network, I faked some labels like this:
label_set = torch.zeros((10000, 1), dtype=torch.int64)
However, the shape of my faked labels does not seem to match what the code expects. In modules.py, GatedMaskedConv2d.forward contains the line
out_v = self.gate(h_vert + h[:, :, None, None])
where h is the (embedded) label. With my labels, h_vert has shape (batch, 2×dim, 16, 16) but h has shape (batch, 1, 2×dim), so the addition cannot broadcast.
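To illustrate what I mean, here is a minimal sketch of the broadcasting issue as I understand it. The batch and dim values are just placeholders, and the idea of passing 1-D labels is only my guess at the fix, not something confirmed by the repo:

import torch

batch, dim = 32, 128
h_vert = torch.randn(batch, 2 * dim, 16, 16)  # vertical-stack feature map

# With labels of shape (N, 1), the class embedding comes out as
# (batch, 1, 2*dim); adding two trailing axes gives (batch, 1, 1, 1, 2*dim),
# which cannot be broadcast against h_vert.
h_bad = torch.randn(batch, 1, 2 * dim)
# h_vert + h_bad[:, :, None, None]  # RuntimeError: shapes do not broadcast

# With 1-D labels of shape (N,), the embedding is (batch, 2*dim);
# h[:, :, None, None] becomes (batch, 2*dim, 1, 1) and broadcasts cleanly.
h_good = torch.randn(batch, 2 * dim)
out_v = h_vert + h_good[:, :, None, None]
print(out_v.shape)  # torch.Size([32, 256, 16, 16])

# So presumably the fake labels should be a 1-D tensor instead:
label_set = torch.zeros(10000, dtype=torch.int64)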
So can anyone tell me how to deal with the labels?
Thanks.