
    Backpropagation Convergence Via Deterministic Nonmonotone Perturbed Minimization

    File(s)
    Backpropagation Convergence Via Deterministic Nonmonotone Perturbed Minimization (129.8 KB)
    Date
    1994
    Author
    Solodov, Mikhail
    Mangasarian, Olvi
    Abstract
    The fundamental backpropagation (BP) algorithm for training artificial neural networks is cast as a deterministic nonmonotone perturbed gradient method. Under certain natural assumptions, such as divergence of the series of learning rates together with convergence of the series of their squares, it is established that every accumulation point of the online BP iterates is a stationary point of the BP error function. The results presented cover serial and parallel online BP, modified BP with a momentum term, and BP with weight decay.
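    For concreteness, the learning-rate assumption named in the abstract is the classical Robbins–Monro step-size condition; a minimal restatement (the symbol \eta_t for the learning rate at step t is our notation, not the report's):

    \sum_{t=1}^{\infty} \eta_t = \infty, \qquad \sum_{t=1}^{\infty} \eta_t^2 < \infty

    For example, \eta_t = 1/t satisfies both: the harmonic series diverges, while \sum_{t=1}^{\infty} 1/t^2 converges.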
    Subject
    backpropagation convergence
    Permanent Link
    http://digital.library.wisc.edu/1793/64530
    Citation
    94-06
    Part of
    • Math Prog Technical Reports
