Improved Generalization via Tolerant Training
Abstract
Theoretical and computational justification is given for improved generalization when the training set is learned with less accuracy. The model used for this investigation is a simple linear one. It is shown that learning a training set with a tolerance T improves generalization, over zero-tolerance training, for any testing set satisfying a certain closeness condition to the training set. These results, obtained via a mathematical programming formulation, are placed in the context of some well-known machine learning results. Computational confirmation of improved generalization is given for linear systems (including nine of the twelve real-world data sets tested), as well as for nonlinear systems such as neural networks, for which no theoretical results are available at present. In particular, the tolerant training method improves generalization on noisy, sparse, and over-parameterized problems.
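The report's mathematical programming formulation is not reproduced in this abstract. As an illustrative assumption only, tolerance-T training of a linear model can be read as minimizing a loss that charges nothing for training errors inside a band of width T; the sketch below implements that reading with a simple subgradient descent. The function name `tolerant_fit`, the toy data, and all parameter values are hypothetical, not taken from the report.

```python
import numpy as np

def tolerant_fit(X, y, T, lr=0.05, epochs=5000):
    """Minimize sum_i max(|x_i . w - y_i| - T, 0) by subgradient descent.
    Residuals inside the tolerance band of width T incur no penalty,
    so the fit is not forced to reproduce the training set exactly."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(epochs):
        r = X @ w - y
        # Subgradient: only points outside the tolerance band contribute.
        g = X.T @ (np.sign(r) * (np.abs(r) > T)) / n
        w -= lr * g
    return w

# Toy noisy linear data: y = 2x + 1 + uniform noise in [-0.05, 0.05].
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 40)
X = np.column_stack([x, np.ones_like(x)])
y = 2.0 * x + 1.0 + rng.uniform(-0.05, 0.05, 40)

# With T larger than the noise level, a zero-loss band exists and the
# training residuals settle near (or inside) the tolerance band.
w = tolerant_fit(X, y, T=0.1)
max_resid = np.max(np.abs(X @ w - y))
```

With T set above the noise level, the band absorbs the noise instead of forcing the weights to chase it, which is one way to picture why tolerant training can generalize better on noisy problems.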
Subject
generalization
function approximation
inductive learning
Permanent Link
http://digital.library.wisc.edu/1793/65030
Type
Technical Report
Citation
95-11