I saw an implementation of the formula in this form. I wonder whether it is feasible to directly compute the loss via torch.autograd operations if I define an energy function myself, e.g. the logsumexp of the classifier logits as in the paper "Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One".
Lines 5 to 15 in 7f27f4a:

```python
import torch
from torch import autograd

def dsm(energy_net, samples, sigma=1):
    samples.requires_grad_(True)
    # Perturb the samples with Gaussian noise of scale sigma.
    vector = torch.randn_like(samples) * sigma
    perturbed_inputs = samples + vector
    # log p(x) = -E(x); differentiate it w.r.t. the perturbed inputs to get the score.
    logp = -energy_net(perturbed_inputs)
    dlogp = sigma ** 2 * autograd.grad(logp.sum(), perturbed_inputs, create_graph=True)[0]
    kernel = vector
    # Denoising score matching loss: ||sigma^2 * score + noise||^2 / 2, averaged over the batch.
    loss = torch.norm(dlogp + kernel, dim=-1) ** 2
    loss = loss.mean() / 2.
    return loss
```
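For what it's worth, here is a minimal sketch of what I have in mind: wrapping a classifier so that its energy is the negative logsumexp of the logits, and passing that wrapper to `dsm` above. `LogSumExpEnergy` and the toy classifier are hypothetical names for illustration only, not part of this repo.

```python
import torch
import torch.nn as nn

class LogSumExpEnergy(nn.Module):
    """Hypothetical energy defined from a classifier, following the JEM idea:
    E(x) = -logsumexp_y f(x)[y], so that -E(x) = log p(x) up to a constant."""
    def __init__(self, classifier):
        super().__init__()
        self.classifier = classifier  # any module mapping x -> logits

    def forward(self, x):
        logits = self.classifier(x)
        return -torch.logsumexp(logits, dim=-1)  # per-sample energy

# Toy usage sketch (placeholder classifier, 2-D inputs, 10 classes).
classifier = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 10))
energy_net = LogSumExpEnergy(classifier)
samples = torch.randn(64, 2)
loss = dsm(energy_net, samples, sigma=0.1)
loss.backward()  # gradients flow to the classifier through create_graph=True
```

Would this be a sound way to train such an energy function with this loss?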