Neural networks - why so many learning rules?


I am starting out with neural networks, and so far at least three (different?) learning rules (Hebbian, delta rule, backpropagation) have been presented in the context of supervised learning.

Am I missing something? If the goal is simply to minimize the error, why not just do gradient descent on Error(whole_set_of_weights)?

Edit: I would like the answer to be more than just "implement gradient descent". It would be useful if someone could point out the actual differences between those methods, and how they differ from plain gradient descent.

As far as I can tell, these learning rules take the layered structure of the network into account, whereas minimizing Error(w) over the entire weight set completely ignores it. How does that fit together?
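For concreteness, here is a rough sketch (in Python, with a toy quadratic standing in for the network's real error) of what I mean by "just do gradient descent on Error(whole_set_of_weights)":

```python
import numpy as np

# Sketch only: treat every weight in the network as one flat vector w and
# follow the gradient of the error. The error function here is a toy
# quadratic, a stand-in for whatever the network's actual loss would be.

def error(w):
    return np.sum((w - 1.0) ** 2)       # toy loss with its minimum at w = 1

def error_grad(w):
    return 2.0 * (w - 1.0)              # gradient of the toy loss

w = np.random.randn(10)                 # all weights, layer structure ignored
learning_rate = 0.1

for step in range(100):
    w -= learning_rate * error_grad(w)  # plain gradient-descent update

print(error(w))                         # close to 0 after enough steps
```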

One issue is how to apportion the "blame" for an error. The classic delta rule or LMS rule is essentially gradient descent. When you apply the delta rule to a multilayer network, you get backprop.
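As a sketch of that point, here is the delta (LMS) rule for a single linear unit on made-up data; the per-example update it performs is exactly a gradient-descent step on that example's squared error (the data, learning rate, and epoch count are only illustrative):

```python
import numpy as np

# Delta (LMS) rule for one linear unit on toy data. The update
# w += eta * delta * x is the gradient-descent step on (y - w.x)**2 / 2.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))        # toy inputs
true_w = np.array([0.5, -2.0, 1.0])
y = X @ true_w                       # toy targets from a linear rule

w = np.zeros(3)
eta = 0.05                           # learning rate

for epoch in range(20):
    for x_i, y_i in zip(X, y):
        y_hat = w @ x_i              # unit's prediction
        delta = y_i - y_hat          # error signal for this example
        w += eta * delta * x_i       # delta rule == gradient step on squared error

print(w)                             # approaches true_w
```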

"Why?" On your specific question Not only the shield dynasty? "The population of the shield can work for some problems, but many problems are Local Minma, which will be trapped in the naive shield dynasty, for this the initial reaction is to add a" speed "word so that you can" roll out "the local minimum That the classic backprop algorithm is too much

