
31st July 2007, 06:52 AM
Member
Join Date: Sep 2005
Posts: 1,601
Backpropagation is a supervised learning technique used for training artificial neural networks.
Essentially (adapted from Wikipedia):
Summary of the technique:
- Present a training sample to the neural network.
- Compare the network's output to the desired output for that sample, and calculate the error at each output neuron.
- For each neuron, calculate what the output should have been, and a scaling factor for how much the output must be adjusted up or down to match the desired output. This is the local error.
- Adjust the weights of each neuron to lower its local error.
- Assign "blame" for the local error to neurons in the previous layer, giving greater responsibility to neurons connected by stronger weights.
- Repeat the steps above on the neurons in the previous layer, using each one's "blame" as its error.
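The steps above can be sketched in code. This is a minimal, hypothetical example (not from the original post): a small 2-4-1 sigmoid network trained on XOR, where the network size, learning rate, and iteration count are arbitrary choices for illustration.

```python
# Minimal backpropagation sketch: 2-4-1 sigmoid network trained on XOR.
# All sizes and hyperparameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Training samples and their desired outputs (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for the hidden and output layers.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

initial_error = float(np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2))

lr = 0.5
for epoch in range(5000):
    # Present the training samples: forward pass.
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # network output

    # Compare output to desired output: local error at each output neuron,
    # scaled by the sigmoid derivative out * (1 - out).
    err_out = (out - y) * out * (1 - out)

    # Assign "blame" to the previous layer, weighted by connection strength.
    err_hid = (err_out @ W2.T) * h * (1 - h)

    # Adjust the weights of each neuron to lower its local error.
    W2 -= lr * h.T @ err_out; b2 -= lr * err_out.sum(axis=0)
    W1 -= lr * X.T @ err_hid; b1 -= lr * err_hid.sum(axis=0)

final_error = float(np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2))
print(initial_error, final_error)
```

After training, the mean squared error should be well below its starting value, which is the "repeat until the error is low" loop in practice.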
The reason I have chosen it is that it is well suited to complex problems that require generalisation. All this means is that it can handle data similar to what it was trained on, which may or may not lead to the same result.
I guess you could say it is similar to a back rating, but much more complex, as it learns the rating itself.
Good Luck.