An Introduction to Support Vector Machines and other kernel-based learning methods
Support Vector Machines (SVMs) are a comparatively new family of classification methods. Grounded in statistical learning theory, an SVM constructs a boundary between classes by simultaneously minimising the empirical error on the training set and controlling the complexity of the decision boundary, which may be non-linear. SVMs use a kernel function to transform a non-linear separation problem in input space into a linear separation problem in a higher-dimensional feature space; common kernels include the radial basis function, polynomial, and sigmoidal kernels. In many simulation studies and real applications, SVMs show superior generalisation performance compared with traditional classification methods. SVMs also provide several useful statistics for both model selection and feature selection, because these statistics upper-bound the leave-one-out cross-validation estimate of generalisation error. Beyond the standard two-class setting, SVMs can be applied to multiclass problems through approaches such as one-class classifiers, one-against-one, one-against-all, and DAG (Directed Acyclic Graph) trees. Methods for feature selection include RFE (Recursive Feature Elimination) and gradient-descent-based approaches.
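The kernel idea described above can be illustrated with a minimal sketch (not taken from the book): a kernel perceptron with an RBF kernel solves the XOR problem, which no linear boundary in input space can separate. The perceptron stands in for the full SVM optimisation, but the kernel trick it relies on is the same.

```python
# Sketch: RBF kernel turns the linearly inseparable XOR problem into
# one a linear method (here, a kernel perceptron) can solve.
import math

def rbf(x, z, gamma=1.0):
    # Radial Basis Function kernel: k(x, z) = exp(-gamma * ||x - z||^2)
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

# XOR: no straight line in input space separates the two classes.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, 1, 1, -1]

# Kernel perceptron: the separating hyperplane lives in feature space,
# represented implicitly by per-example mistake counts alpha.
alpha = [0] * len(X)
for _ in range(10):  # a few passes suffice on this tiny dataset
    for i, (xi, yi) in enumerate(zip(X, y)):
        f = sum(a * yj * rbf(xj, xi) for a, yj, xj in zip(alpha, y, X))
        if yi * f <= 0:  # mistake: move the boundary toward this point
            alpha[i] += 1

preds = [1 if sum(a * yj * rbf(xj, xi) for a, yj, xj in zip(alpha, y, X)) > 0
         else -1 for xi in X]
print(preds)  # all four XOR points classified correctly: [-1, 1, 1, -1]
```

A full SVM would additionally maximise the margin of this feature-space boundary and would typically be used through a library rather than implemented by hand; the sketch only shows how a kernel makes the non-linear problem linearly separable.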