Backpropagation Convergence Via Deterministic Nonmonotone Perturbed Minimization
dc.contributor.author | Solodov, Mikhail | |
dc.contributor.author | Mangasarian, Olvi | |
dc.date.accessioned | 2013-01-25T19:44:22Z | |
dc.date.available | 2013-01-25T19:44:22Z | |
dc.date.issued | 1994 | |
dc.identifier.citation | 94-06 | en |
dc.identifier.uri | http://digital.library.wisc.edu/1793/64530 | |
dc.description.abstract | The fundamental backpropagation (BP) algorithm for training artificial neural networks is cast as a deterministic nonmonotone perturbed gradient method. Under certain natural assumptions, such as divergence of the series of learning rates while the series of their squares converges, it is established that every accumulation point of the online BP iterates is a stationary point of the BP error function. The results presented cover serial and parallel online BP, modified BP with a momentum term, and BP with weight decay. | en |
dc.subject | backpropagation convergence | en |
dc.title | Backpropagation Convergence Via Deterministic Nonmonotone Perturbed Minimization | en |
dc.type | Technical Report | en |
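The abstract's key condition is the classical diminishing step-size requirement: the learning rates eta_k must satisfy sum eta_k = infinity and sum eta_k^2 < infinity. The following is a minimal illustrative sketch (not the report's own code or experiments) of serial online gradient descent on a simple least-squares error, using the schedule eta_k = c/(k+1), which satisfies both series conditions; the problem, data, and constant c are assumptions made purely for illustration.

```python
import numpy as np

# Illustrative sketch only: online (incremental) gradient descent on a
# least-squares error, with learning rates eta_k = c / (k + 1).
# This schedule satisfies the abstract's conditions:
#   sum_k eta_k diverges, while sum_k eta_k^2 converges.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))           # hypothetical training inputs
w_true = np.array([1.0, -2.0, 0.5])     # hypothetical target weights
y = X @ w_true                          # targets

w = np.zeros(3)                         # weights being trained
c = 0.1                                 # assumed step-size constant
for k in range(10_000):
    i = k % len(X)                      # serial (cyclic) pass over the examples
    eta = c / (k + 1)                   # diminishing learning rate
    grad_i = (X[i] @ w - y[i]) * X[i]   # gradient of the single-example error
    w -= eta * grad_i                   # online update: a perturbed gradient step

print(w)  # approaches w_true, consistent with convergence to a stationary point
```

Each single-example gradient differs from the full-batch gradient, so the iteration is a perturbed gradient method in the sense of the abstract; the diminishing learning rates are what drive the accumulation points to stationarity.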
This item appears in the following Collection(s)
- Math Prog Technical Reports: Archive for the Department of Computer Sciences at the University of Wisconsin-Madison