Bearded-android-docs

BackPropFormula

Created Wednesday 16 July 2014

Not to be confused with the backprop algorithm itself, the backpropagation formula (or backprop formula) is used to adjust the values of the weights in each iteration of the backprop algorithm. As described on the :NeuralNetworks:BackPropagation:Motivation page, the network "learns" by minimizing the error function, incrementally adjusting the weights until the actual output is close to the expected output.

Notation

* `w_(kj)^((l))` — the weight connecting neuron `j` in layer `l-1` to neuron `k` in layer `l`
* `delta_k^((l))[i]` — the error term of neuron `k` in layer `l` for training pattern `i`
* `y_j^((l-1))[i]` — the output of neuron `j` in layer `l-1` for training pattern `i`
* `Delta w_(kj)^((l))[i]` — the weight change contributed by training pattern `i`
* `w_(kj)^(**(l))` — the weight change applied in the previous iteration (the momentum term)

The Formula

`\ w_(kj)^((l)) = w_(kj)^((l)) + alpha w_(kj)^(**(l)) + sum_i Delta w_(kj)^((l))[i]`
`\ \ \ \ = w_(kj)^((l)) + alpha w_(kj)^(**(l)) + eta sum_i delta_k^((l))[i] y_j^((l-1))[i]`
where `alpha` is the momentum parameter, `eta` is the learning rate, and the sum runs over the training patterns `i` in the current batch.
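The update above can be sketched in a few lines of NumPy. This is only an illustrative implementation of the formula as written; the function and variable names (`update_weights`, `prev_update`, `deltas`, `y_prev`) are assumptions, not taken from this page.

```python
import numpy as np

def update_weights(w, prev_update, deltas, y_prev, eta=0.1, alpha=0.9):
    """Apply w <- w + alpha * prev_update + eta * sum_i delta[i] * y[i].

    w           : (K, J) weight matrix for layer l
    prev_update : (K, J) weight change from the previous iteration (momentum term)
    deltas      : (N, K) error terms delta_k^(l)[i], one row per pattern i
    y_prev      : (N, J) outputs y_j^(l-1)[i] of the previous layer
    Returns the new weights and the update just applied (to feed back in
    as prev_update on the next iteration).
    """
    # deltas.T @ y_prev performs the sum over patterns i for every (k, j) pair
    batch_term = eta * deltas.T @ y_prev
    update = alpha * prev_update + batch_term
    return w + update, update
```

Keeping the returned `update` around and passing it back as `prev_update` on the next call is what gives the momentum term its smoothing effect across iterations.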

Before you implement your own version of the backprop algorithm, check out the :NeuralNetworks:BackPropagation:Motivation page and the different types of algorithms for implementation insights, tips, and gotchas.

