
About the training of the VQ-VAE. #7

Open
LsFlyt opened this issue Nov 14, 2023 · 2 comments

LsFlyt commented Nov 14, 2023

In VQ-Font/model/VQ-VAE.ipynb:

for i in xrange(num_training_updates):
    data = next(iter(train_loader))
    train_data_variance = torch.var(data)
    # print(train_data_variance)
    # show(make_grid(data.cpu().data))
    # break
    data = data - 0.5  # normalize to [-0.5, 0.5]
    data = data.to(device)
    optimizer.zero_grad()

The code normalizes the data to [-0.5, 0.5]. However, the last layer of the VQ-VAE decoder is a sigmoid, whose output lies in (0, 1). Is this a mistake?
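For illustration, a minimal sketch of the apparent mismatch (hypothetical tensors, not the repo's actual code): a sigmoid decoder can only emit values in (0, 1), so the negative half of targets shifted to [-0.5, 0.5] is unreachable and the MSE reconstruction loss has an irreducible floor.

import torch
import torch.nn.functional as F

# Hypothetical tensors, just to demonstrate the range mismatch.
targets = torch.rand(4, 1, 28, 28) - 0.5  # shifted to [-0.5, 0.5], as in the loop above
logits = torch.randn(4, 1, 28, 28)
recon = torch.sigmoid(logits)             # sigmoid output lies in (0, 1)

# recon can never be negative, so any target below 0 is unreachable.
print(recon.min().item() >= 0.0)          # always True
print(F.mse_loss(recon, targets).item())  # loss cannot reach zero

# Two possible ways to make the ranges agree (assumptions, not the repo's code):
recon_shifted = recon - 0.5               # shift the decoder output to [-0.5, 0.5]
# ...or drop the `data = data - 0.5` shift and keep targets in [0, 1]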

LsFlyt (Author) commented Nov 17, 2023

And another question: in the VQ-VAE, the data are normalized to [-0.5, 0.5], but in training phase 2, the content image (which is fed to the content encoder) is normalized to [-1, 1].
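For concreteness, a minimal sketch of the two conventions side by side (assuming the loaders yield images in [0, 1]; the repo's actual transforms may be defined elsewhere):

import torch

x = torch.rand(1, 1, 128, 128)   # assume an image batch in [0, 1]

vqvae_input = x - 0.5            # VQ-VAE notebook: [-0.5, 0.5]
content_input = x * 2.0 - 1.0    # phase-2 content encoder: [-1, 1]

# Same image, but the phase-2 input is scaled 2x relative to what the
# VQ-VAE saw during pre-training, so if the pretrained encoder or codebook
# is reused in phase 2, it receives a different input distribution.
print(vqvae_input.min().item(), vqvae_input.max().item())
print(content_input.min().item(), content_input.max().item())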

awei669 (Owner) commented Nov 17, 2023

Sorry for the late reply. It has been a long time, so I cannot remember why I used 'data = data - 0.5 # normalize to [-0.5, 0.5]'. It may or may not be a mistake.
