Hello! On line 74 of layers.py, you have implemented the bias as follows:
out = self.act_fn(torch.matmul(self.inp, self.weights))
if self.use_bias:
    out = out + self.bias
Typically, one would have the bias inside the activation function, i.e. f(wx+b) as opposed to f(wx) + b (or, alternatively, wf(x)+b, but that requires additional changes). Is there any particular reason it was done this way? Did you experiment with the standard method?
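For concreteness, here is a minimal sketch of the two placements side by side, written as free-standing functions with the tensors passed in explicitly (the function names and signatures are mine for illustration, not from layers.py):

```python
import torch

def forward_bias_before_activation(inp, weights, bias, act_fn, use_bias=True):
    # Standard placement: bias is added before the nonlinearity, f(Wx + b)
    pre_act = torch.matmul(inp, weights)
    if use_bias:
        pre_act = pre_act + bias
    return act_fn(pre_act)

def forward_bias_after_activation(inp, weights, bias, act_fn, use_bias=True):
    # Placement as quoted from layers.py: bias added after the nonlinearity, f(Wx) + b
    out = act_fn(torch.matmul(inp, weights))
    if use_bias:
        out = out + bias
    return out

# Quick check that the two placements give different outputs for a nonlinear act_fn:
x = torch.randn(4, 8)
W = torch.randn(8, 16)
b = torch.randn(16)
print(torch.allclose(forward_bias_before_activation(x, W, b, torch.relu),
                     forward_bias_after_activation(x, W, b, torch.relu)))  # False in general
```

Note in particular that with f(Wx) + b and a ReLU activation, the output can be negative and the bias cannot shift where the nonlinearity "turns on", which is the usual reason the bias is placed inside the activation.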