
Bias convention #2

Open
bjornvz opened this issue Nov 30, 2023 · 0 comments

Comments

bjornvz commented Nov 30, 2023

Hello! On line 74 of layers.py, you have implemented the bias as follows:

    out = self.act_fn(torch.matmul(self.inp, self.weights))
    if self.use_bias:
        out = out + self.bias

Typically, one would have the bias inside the activation function, i.e. f(wx + b) as opposed to f(wx) + b (or, alternatively, w f(x) + b, but that requires additional changes). Is there any particular reason it was done this way? Did you experiment with the standard method?
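For concreteness, here is a minimal sketch of what the standard f(wx + b) convention would look like. This is a hypothetical stand-in layer, not the actual class in layers.py; the attribute names (`self.weights`, `self.bias`, `self.use_bias`, `self.act_fn`) just mirror your snippet above:

    import torch

    class DenseLayer(torch.nn.Module):
        # Hypothetical minimal layer illustrating the conventional bias placement.
        def __init__(self, in_features, out_features, act_fn=torch.relu, use_bias=True):
            super().__init__()
            self.weights = torch.nn.Parameter(torch.randn(in_features, out_features))
            self.bias = torch.nn.Parameter(torch.zeros(out_features))
            self.act_fn = act_fn
            self.use_bias = use_bias

        def forward(self, inp):
            pre_act = torch.matmul(inp, self.weights)
            if self.use_bias:
                pre_act = pre_act + self.bias  # bias added before the nonlinearity
            return self.act_fn(pre_act)       # i.e. f(wx + b)

The only behavioral difference from the current code is that the bias is added before `self.act_fn` is applied rather than after it.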
