
dc.contributor.author     Solodov, Mikhail
dc.contributor.author     Mangasarian, Olvi
dc.date.accessioned       2013-01-25T19:44:22Z
dc.date.available         2013-01-25T19:44:22Z
dc.date.issued            1994
dc.identifier.citation    94-06
dc.identifier.uri         http://digital.library.wisc.edu/1793/64530
dc.description.abstract   The fundamental backpropagation (BP) algorithm for training artificial neural networks is cast as a deterministic nonmonotone perturbed gradient method. Under certain natural assumptions, such as divergence of the series of learning rates and convergence of the series of their squares, it is established that every accumulation point of the online BP iterates is a stationary point of the BP error function. The results presented cover serial and parallel online BP, modified BP with a momentum term, and BP with weight decay.
dc.subject                backpropagation convergence
dc.title                  Backpropagation Convergence Via Deterministic Nonmonotone Perturbed Minimization
dc.type                   Technical Report
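
For orientation, the learning-rate assumptions cited in the abstract are the standard diminishing step-size conditions for perturbed gradient methods. A minimal LaTeX sketch of an online update of this form and the two series conditions follows; the notation (weights $w^k$, learning rates $\eta_k$, BP error function $E$, perturbation $e^k$) is assumed here for illustration and is not taken from the report itself:

    w^{k+1} = w^k - \eta_k \bigl( \nabla E(w^k) + e^k \bigr),
    \qquad \sum_{k=0}^{\infty} \eta_k = \infty,
    \qquad \sum_{k=0}^{\infty} \eta_k^{2} < \infty

Under assumptions of this kind, the conclusion stated in the abstract is that every accumulation point of the iterate sequence $\{w^k\}$ is a stationary point of the BP error function $E$.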


This item appears in the following Collection(s)

  • Math Prog Technical Reports
    Math Prog Technical Reports Archive for the Department of Computer Sciences at the University of Wisconsin-Madison
