Parsimonious Side Propagation
Abstract
A fast parsimonious linear-programming-based algorithm for training neural networks is proposed that suppresses redundant features while using a minimal number of hidden units. This is achieved by propagating sideways to newly added hidden units the task of separating successive groups of unclassified points. Computational results show improvements of 26.53% and 19.76% in tenfold cross-validation test correctness over a parsimonious perceptron on two publicly available datasets.
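The core idea described above, adding hidden units one at a time and handing each new unit only the points its predecessors left unclassified, can be sketched as follows. This is a hypothetical toy illustration, not the paper's algorithm: it trains each unit with a simple perceptron update rather than the linear program the report uses, and all function and variable names are invented for illustration.

```python
import numpy as np

def side_propagation_sketch(X, y, max_units=10, epochs=100, lr=0.1):
    """Toy sketch of side propagation: each new hidden unit is trained
    only on the points earlier units still misclassify. A perceptron
    update stands in for the paper's linear-programming subproblem."""
    units = []                      # list of (weights, bias) pairs
    remaining = np.arange(len(X))   # indices of still-unclassified points
    for _ in range(max_units):
        if remaining.size == 0:
            break                   # every point separated; stop adding units
        Xr, yr = X[remaining], y[remaining]
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(epochs):
            for xi, yi in zip(Xr, yr):
                if yi * (xi @ w + b) <= 0:   # misclassified: perceptron update
                    w += lr * yi * xi
                    b += lr * yi
        units.append((w, b))
        # points this unit still gets wrong are passed "sideways"
        # to the next hidden unit
        preds = np.sign(Xr @ w + b)
        remaining = remaining[preds != yr]
    return units, remaining

# Linearly separable toy data (label = sign of first coordinate)
X = np.array([[1.0, 0.5], [2.0, -1.0], [-1.5, 0.3], [-0.5, -2.0]])
y = np.array([1, 1, -1, -1])
units, leftover = side_propagation_sketch(X, y)
```

On this separable toy set a single unit suffices, so `leftover` ends empty; on harder data the loop keeps appending units until the unclassified group is exhausted or the unit budget runs out.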
Permanent Link
http://digital.library.wisc.edu/1793/66051
Type
Technical Report
Citation
97-11