
Projection back to the range step #3

Open
1094442522 opened this issue Sep 19, 2021 · 4 comments

Comments

@1094442522

Hi, I read the paper and found that there is a "projection back to the range" step in Algorithm 1. Is this step implemented in ilo_stylegan.py?

From my understanding of the code, there is a projection onto the l1-ball neighbourhood of prev_gen_out, but I don't find the step z_p ← G1(z_k) in the code (step 6 in Algorithm 1). I am wondering if there is something wrong with my understanding.

Thanks for your help; attached is Algorithm 1.
[Screenshot: Algorithm 1 from the paper]
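
For reference, the step under discussion (reconstructed from this question and the paper excerpt quoted later in the thread) is roughly:

$$z^k \approx \arg\min_{z} \left\lVert G_1(z) - \tilde{z}_p \right\rVert \quad \text{(solved by gradient descent, initialized at } \hat{z}^p\text{)}, \qquad z_p \leftarrow G_1(z^k)$$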

@akashsharma02

akashsharma02 commented Apr 11, 2022

@1094442522 Did you figure this out? @giannisdaras It would be really helpful to get your clarification on this.

@ffhibnese

ffhibnese commented Sep 26, 2022

I've got the same question as you did... I suspect that the author didn't implement lines 5 and 6 in ilo_stylegan.py. @1094442522 @akashsharma02

@giannisdaras
Owner

The following code projects back to the l1-ball from the solution of the previous layer:

`deviation = project_onto_l1_ball(self.gen_outs[-1] - prev_gen_out,`

If the solution of the previous layer lies within the range of that layer (which is definitely the case when you optimize in the first intermediate layer), you are guaranteed to stay within an l1 deviation of the range.

Is this answering what you guys are asking? Thanks for your interest!
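
In case it helps other readers: `project_onto_l1_ball` is a helper defined in this repository, and a common way such a projection is implemented (the sorting-based method of Duchi et al., 2008) looks roughly like the sketch below. The usage lines and variable names (`current_latent`, `prev_gen_out`, `radius`) are illustrative placeholders, not the repo's exact code.

```python
import torch

def project_onto_l1_ball(x, eps):
    """Euclidean projection of each row of x onto the l1 ball of radius eps
    (sorting-based method of Duchi et al., 2008)."""
    original_shape = x.shape
    x = x.view(x.shape[0], -1)
    # Rows that are already inside the ball are left untouched.
    mask = (torch.norm(x, p=1, dim=1) < eps).float().unsqueeze(1)
    # Sort magnitudes in descending order and compute the soft threshold theta.
    mu, _ = torch.sort(torch.abs(x), dim=1, descending=True)
    cumsum = torch.cumsum(mu, dim=1)
    arange = torch.arange(1, x.shape[1] + 1, device=x.device, dtype=x.dtype)
    rho, _ = torch.max((mu * arange > (cumsum - eps)) * arange, dim=1)
    theta = (cumsum.gather(1, (rho.long() - 1).unsqueeze(1)) - eps) / rho.unsqueeze(1)
    proj = (torch.abs(x) - theta).clamp(min=0)
    x = mask * x + (1 - mask) * proj * torch.sign(x)
    return x.view(original_shape)

# Illustrative usage: keep the current intermediate latent within an l1 ball
# of radius `radius` around the solution found for the previous layer.
# deviation = project_onto_l1_ball(current_latent - prev_gen_out, radius)
# constrained_latent = prev_gen_out + deviation
```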

@ffhibnese

ffhibnese commented Sep 26, 2022

> The following code projects back to the l1-ball from the solution of the previous layer:
>
> `deviation = project_onto_l1_ball(self.gen_outs[-1] - prev_gen_out,`
>
> If the solution of the previous layer lies within the range of that layer (which is definitely the case when you optimize in the first intermediate layer), you are guaranteed to stay within an l1 deviation of the range.
>
> Is this answering what you guys are asking? Thanks for your interest!

You wrote "This problem is solved by initializing a latent vector $z^p$ to $\hat{z}^p$ and then minimizing using gradient descent the loss $||G_1(z^k) − \tilde{z}_p||$" in the paper, namely the 5th line of Algorithm 1. But I don't find any inplementation of this. In my opinion, the code just simplly project present vector into an l1-deviation ball.

Thanks for answering my question! I would be grateful if you could help me figure this out.
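
For what it's worth, a minimal PyTorch sketch of what the quoted "projection back to the range" step (lines 5 and 6 of Algorithm 1) could look like is below. `G1`, `z_init`, and `target_latent` are placeholder names for the sub-generator up to the current layer, the starting latent $\hat{z}^p$, and the intermediate latent $\tilde{z}_p$; this is not code taken from ilo_stylegan.py.

```python
import torch

def project_back_to_range(G1, z_init, target_latent, steps=100, lr=0.1):
    """Sketch of Algorithm 1, lines 5-6: starting from z_init, minimize
    ||G1(z) - target_latent|| by gradient descent, then return G1(z),
    which by construction lies in the range of G1."""
    z = z_init.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = torch.norm(G1(z) - target_latent)
        loss.backward()
        optimizer.step()
    with torch.no_grad():
        return G1(z)  # z_p <- G1(z_k)
```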
