Show simple item record

dc.contributor.author	Mangasarian, Olvi
dc.contributor.author	Fung, Glenn
dc.date.accessioned	2013-01-16T19:49:23Z
dc.date.available	2013-01-16T19:49:23Z
dc.date.issued	2000
dc.identifier.citation	00-02	en
dc.identifier.uri	http://digital.library.wisc.edu/1793/64282
dc.description.abstract	The problem of extracting a minimal number of data points from a large dataset, in order to generate a support vector machine (SVM) classifier, is formulated as a concave minimization problem and solved by a finite number of linear programs. This minimal set of data points, which is the smallest number of support vectors that completely characterize a separating plane classifier, is considerably smaller than that required by a standard 1-norm support vector machine with or without feature selection. The proposed approach also incorporates a feature selection procedure that results in a minimal number of input features used by the classifier. Tenfold cross validation gives as good or better test results using the proposed minimal support vector machine (MSVM) classifier, based on the smaller set of data points, compared to a standard 1-norm support vector machine classifier. The reduction in data points used by an MSVM classifier over those used by a 1-norm SVM classifier averaged 66% on seven public datasets and was as high as 81%. This makes MSVM a useful incremental classification tool which maintains only a small fraction of a large dataset before merging and processing it with new incoming data.	en
dc.subject	linear programming	en
dc.subject	concave minimization	en
dc.subject	data selection	en
dc.subject	data classification	en
dc.subject	support vector machines	en
dc.title	Data Selection for Support Vector Machine Classifiers	en
dc.type	Technical Report	en
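The abstract compares MSVM against a standard 1-norm support vector machine, which can itself be trained by a single linear program: minimize ||w||_1 + C*sum(xi) subject to y_i(w·x_i + b) >= 1 - xi_i, xi >= 0. The sketch below illustrates that baseline formulation only (not the report's concave-minimization MSVM procedure); the variable split w = p - q, the solver choice, and the toy dataset are illustrative assumptions, not taken from the report.

```python
# Illustrative sketch of a standard 1-norm SVM as a single linear program,
# the baseline the abstract's MSVM is compared against. Not the report's
# MSVM algorithm; names and data here are assumptions for demonstration.
import numpy as np
from scipy.optimize import linprog

def onenorm_svm(X, y, C=1.0):
    """min ||w||_1 + C*sum(xi)  s.t.  y_i (w . x_i + b) >= 1 - xi_i, xi >= 0.
    Writing w = p - q with p, q >= 0 makes the 1-norm objective linear."""
    m, n = X.shape
    # Variable layout: z = [p (n), q (n), b (1), xi (m)]
    c = np.concatenate([np.ones(2 * n), [0.0], C * np.ones(m)])
    # Margin constraints rewritten for linprog's A_ub @ z <= b_ub form:
    #   -y_i * (x_i . (p - q) + b) - xi_i <= -1
    A_ub = np.hstack([-y[:, None] * X,        # coefficients of p
                      y[:, None] * X,         # coefficients of q
                      -y[:, None],            # coefficient of b
                      -np.eye(m)])            # coefficients of xi
    b_ub = -np.ones(m)
    bounds = [(0, None)] * (2 * n) + [(None, None)] + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    z = res.x
    w = z[:n] - z[n:2 * n]
    b = z[2 * n]
    return w, b

# Tiny linearly separable toy problem (illustrative only).
X = np.array([[2.0, 2.0], [3.0, 1.0], [-2.0, -2.0], [-1.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = onenorm_svm(X, y)
preds = np.sign(X @ w + b)
```

The 1-norm objective is what drives many components of w to exactly zero, which is why the abstract notes that 1-norm SVMs already perform a degree of feature selection; MSVM additionally minimizes the number of data points retained.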


This item appears in the following Collection(s)

  • DMI Technical Reports
    DMI Technical Reports Archive for the Department of Computer Sciences at the University of Wisconsin-Madison
