LearningTerm

Created Monday 14 July 2014

The learning term takes its name from being the primary term in the backpropagation algorithm that adjusts the weights in the neural network. There are actually multiple learning terms: each weight has its own.

Notation

Constructing the Learning Terms

The learning term for weight `w_(kj)` in layer `l` and the `i`th training example is:

`\ \ Delta w_(kj)^((l))[i] = eta delta_k^((l))[i] y_j^((l-1))[i]`

where `eta` is a constant called the learning parameter (also known as the learning rate). `eta` controls the rate at which the network "learns", i.e., the rate at which the error function approaches its minimum.
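
To make the formula concrete, here is a minimal NumPy sketch (not from the original notes). It assumes `delta_l` and `y_prev` hold the already-computed `delta_k^((l))[i]` and `y_j^((l-1))[i]` values for a single training example; the layer sizes and numbers are made up.

```python
import numpy as np

# Minimal sketch: per-example learning terms for every weight in layer l.
# Assumed inputs (made up): delta_l from the backward pass, y_prev from
# the forward pass, both for a single training example i.
eta = 0.1                                  # learning parameter (learning rate)
delta_l = np.array([0.2, -0.5, 0.1])       # delta_k^(l)[i], layer l has 3 units
y_prev = np.array([1.0, 0.3, -0.7, 0.5])   # y_j^(l-1)[i], layer l-1 has 4 units

# Delta w_(kj)^(l)[i] = eta * delta_k^(l)[i] * y_j^(l-1)[i]
# The outer product computes all k-by-j learning terms at once.
delta_w = eta * np.outer(delta_l, y_prev)  # shape (3, 4): one term per weight
```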

The cumulative learning term for weight `w_(kj)` in layer `l`, summed over all training examples, is:

`\ \ Delta w_(kj)^((l)) = sum_i Delta w_(kj)^((l))[i] = eta sum_i delta_k^((l))[i] y_j^((l-1))[i]`
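
Again as a rough sketch (same made-up shapes as above), the batch version just sums the per-example outer products; stacking the per-example vectors row by row also lets the sum collapse into a single matrix product.

```python
import numpy as np

# Sketch of the cumulative (batch) learning term over all training examples.
# Rows of `deltas` and `ys` are the per-example vectors delta^(l)[i], y^(l-1)[i].
eta = 0.1
deltas = np.array([[0.2, -0.5, 0.1],       # example i = 0
                   [0.1,  0.4, -0.2]])     # example i = 1
ys = np.array([[1.0,  0.3, -0.7, 0.5],
               [0.2, -0.1,  0.8, 0.4]])

# Loop form mirrors the sum over i in the formula directly.
delta_w_total = eta * sum(np.outer(d, y) for d, y in zip(deltas, ys))

# Equivalent vectorized form: deltas.T @ ys performs the same sum over examples.
assert np.allclose(delta_w_total, eta * (deltas.T @ ys))
```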

See also: how to pick the learning parameter.


Backlinks:

MachineLearning:NeuralNetworks:BackPropagation:BackPropFormula