PMR #15
I'm pretty sure the gradients are still propagated through the int. Notice that before converting to an int, the z-values are multiplied by 1000 so that the z-axis resolution (the one we care about for gradient propagation) is less than 1 mm. The x and y axes are scaled so that one int increment corresponds to one taxel up/down or left/right. The reason they are converted to ints is so that I can use the

You should be able to check by running a couple of epochs as the README suggests and zeroing all of the losses except the PMR loss. As long as the loss function trends down (and it should), it's propagating gradients. -Henry
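As a rough sketch of the scaling described here (shapes and variable names are illustrative, not copied from the repo):

```python
import torch

# Illustrative mesh vertices: x/y already in taxel units, z in meters
verts_taxel = torch.rand(100, 3, requires_grad=True)

scale = torch.tensor([1.0, 1.0, 1000.0])  # z -> millimeters
verts_taxel_int = (verts_taxel * scale).type(torch.LongTensor)
# One int step is now one taxel in x/y and 1 mm in z,
# so the truncation error on z stays below 1 mm
```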
I assume you want to get this method working with the larger pressure array size (i.e., 33x68)? If you get stuck on this, let me know and I'll see if I can fix it. I don't want my messy code to block you. -Henry
Hi Henry, I'll try running the network on just the PMR loss. In the meantime, I was testing type casting with a small script (below), and the code breaks during backprop:

```python
import torch

x = torch.rand(2, 3) * 10 + 1
x.requires_grad = True
print(x, x.requires_grad, x.grad)

# Casting to an integer tensor cuts it out of the autograd graph
x_int = x.type(torch.LongTensor)
print(x_int, x_int.requires_grad)  # requires_grad is now False

gt = torch.ones((2, 3))
criterion = torch.nn.L1Loss()
loss = criterion(x_int, gt)
print(loss)
loss.backward()  # breaks here: loss has no grad_fn
print(x.grad)
```

It produces this error:
Do let me know your thoughts on this.
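For context, the usual workaround when a forward pass needs an integer cast but gradients must still flow is the straight-through estimator: round in the forward pass, but route the gradient around the rounding with detach(). This is a generic sketch of that trick, not necessarily what the BPWNet code does:

```python
import torch

x = torch.rand(2, 3) * 10 + 1
x.requires_grad_()

# Straight-through estimator: the forward pass sees rounded values,
# the backward pass treats the rounding as the identity function
x_round = x + (x.round() - x).detach()

gt = torch.ones((2, 3))
loss = torch.nn.L1Loss()(x_round, gt)
loss.backward()
print(x.grad)  # gradients now reach x
```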
Hello Henry,
Thank you for helping me understand the paper better.
I am a bit confused about how the pressure loss is applied to Mod2 of the BPWNet model.
After going through the paper, I understand the loss flow as follows:
But in checking the code for PMR, you do an int conversion on the verts_taxel part (L555 of mesh_depth_lib), which is non-differentiable.
Since the output of the PMR module depends on verts_taxels_int, which does not have a gradient, how are you sending gradients back to verts_taxel and then to the model?
Best,
Abhishek
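The non-differentiability in question can be checked directly in isolation; a minimal sketch, using verts_taxel only as a stand-in for the repo's tensor:

```python
import torch

verts_taxel = torch.rand(4, 3, requires_grad=True)
verts_taxel_int = (verts_taxel * 1000).type(torch.LongTensor)

# The integer tensor has no grad_fn: the autograd graph is cut at the cast
print(verts_taxel_int.requires_grad, verts_taxel_int.grad_fn)  # False None
```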