SGD #8

Open
i-salameh95 opened this issue Dec 17, 2023 · 1 comment
@i-salameh95

Hello,
I have used this package and it's awesome, but I think there is an issue with NN.prototype.backPropagate.
In gradient descent you should use the derivative of the activation function of the hidden layer, but the code never calls differentiate from the activation file; it hardcodes the derivative of sigmoid for both the hidden and output layers, which is not always correct.

Take an example like x^2: if you train it with the settings you provided, it will fail, because we need the output to pass through as it is (a linear output), not squashed by a sigmoid.
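
To make the failure concrete, here is a small self-contained sketch (hypothetical, not code from the package): a sigmoid output is bounded to (0, 1), so a target like 4 is unreachable, and the hardcoded sigmoid derivative shrinks toward zero as the unit saturates, so the weight updates stall:

    // sigmoid outputs are bounded to (0, 1), so the target y = x^2 = 4
    // (at x = 2) is unreachable, and the hardcoded sigmoid derivative
    // y * (1 - y) vanishes as the unit saturates
    function sigmoid(x) { return 1 / (1 + Math.exp(-x)); }

    var target = 4;                  // y = x^2 at x = 2
    var out = sigmoid(10);           // ~0.99995, about as high as sigmoid gets
    console.log(target - out);       // error stays near 3
    console.log(out * (1 - out));    // ~4.5e-5, so the weight updates vanish too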

@i-salameh95
Author

This is the updated version: a linear activation for the output layer, and the activation (with its derivative) from the settings for the hidden layers.

NN.prototype.backPropagate = function (desiredOutput) {
    var self = this;
    this.errorSigs = [];

    var outputLayerIndex = this.outputs.length - 1;
    var outputLayer = this.outputs[outputLayerIndex];
    var prevLayerOutput = this.outputs[outputLayerIndex - 1];

    var numOutputNodes = outputLayer.length;

    this.errorSigs[outputLayerIndex] = [];

    // initialize changes array if needed
    if (!this.changes[outputLayerIndex])
        this.initializeChanges();

    // update weights for output layer
    for (var n = 0; n < numOutputNodes; n++) {

        var desiredOut = desiredOutput[n];
        var neuronOut = outputLayer[n];
        var neuronError = desiredOut - neuronOut;

        // linear output activation: its derivative is 1, so the error
        // signal is just the raw error (updated by israa)
        var errorSig = neuronError;
        self.errorSigs[outputLayerIndex][n] = errorSig;

        // update neuron connection weights
        for (var p = 0; p < prevLayerOutput.length; p++) {
            var change = self.changes[outputLayerIndex][n][p];
            var weightDelta = self.opts.learningRate * errorSig * prevLayerOutput[p];

            change = weightDelta + (self.opts.momentum * change);

            //console.log('L%s:N%s neuronError %s desiredOut %s, neuronOut %s, errorSig: %s, p: %s, change for p: %s', outputLayerIndex, n, neuronError, desiredOut, neuronOut, errorSig, p, change)

            this.weights[outputLayerIndex][n][p] += change;
            this.changes[outputLayerIndex][n][p] = change;
        }

        // update neuron bias
        var biasDelta = self.opts.learningRate * errorSig;

        this.biases[outputLayerIndex][n] += biasDelta;
    }

    var lastHiddenLayerNum = outputLayerIndex - 1;

    // iterate backwards through the rest of the hidden layers
    for (var layer = lastHiddenLayerNum; layer > 0; layer--) {
        var prevLayerOutput = this.outputs[layer - 1];
        var nextLayerSize = this.outputs[layer + 1].length;

        this.errorSigs[layer] = [];

        // determine error of each neuron's output
        for (var n = 0; n < this.outputs[layer].length; n++) {
            var neuronOut = this.outputs[layer][n];

            // determine weighted sum of next layer's errorSigs
            var nextLayerErrorSum = 0;

            // determine errors for each connection to this neuron
            for (var d = 0; d < nextLayerSize; d++) {
                nextLayerErrorSum += this.errorSigs[layer + 1][d] * (self.weights[layer + 1][d][n] || 0);
            }

            // hidden layers: scale by the derivative of the configured
            // activation, evaluated at this neuron's net input (updated)
            var errorSig = nextLayerErrorSum * activationFns[this.opts.activation].differentiate(this.netInput[layer][n]);

            this.errorSigs[layer][n] = errorSig;

            // update neuron connection weights
            for (var p = 0; p < prevLayerOutput.length; p++) {
                var change = this.changes[layer][n][p];
                var weightDelta = this.opts.learningRate * errorSig * prevLayerOutput[p];

                change = weightDelta + (this.opts.momentum * change);

                //console.log('change hidden layer L%s:N%s:p:%s : %s', layer, n, p, change)

                this.weights[layer][n][p] += change;
                this.changes[layer][n][p] = change;
            }

            // update neuron bias
            this.biases[layer][n] += this.opts.learningRate * errorSig;
        }
    }
};
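
With a linear output the derivative is identically 1, so the output-layer error signal reduces to the raw error desiredOut - neuronOut, which is exactly the squared-error gradient for a linear unit. The hidden-layer line assumes the activation module exports a map of derivative functions; the sketch below shows the shape I am assuming from the activationFns[...].differentiate call above (the names run/differentiate are assumptions, and the package's actual activation file may differ):

    // assumed shape of the activation module: each entry pairs the
    // forward function with its derivative, taken at the net input
    var activationFns = {
        sigmoid: {
            run: function (x) { return 1 / (1 + Math.exp(-x)); },
            differentiate: function (x) {
                var y = 1 / (1 + Math.exp(-x));
                return y * (1 - y);
            }
        },
        tanh: {
            run: function (x) { return Math.tanh(x); },
            differentiate: function (x) {
                var y = Math.tanh(x);
                return 1 - y * y;
            }
        },
        // linear/identity, suitable for regression targets like x^2
        linear: {
            run: function (x) { return x; },
            differentiate: function () { return 1; }
        }
    };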