An Introduction to Support Vector Machines and Other Kernel-based Learning Methods
Cambridge University Press, 23 March 2000

This is the first comprehensive introduction to Support Vector Machines (SVMs), a new generation of learning systems based on recent advances in statistical learning theory. SVMs deliver state-of-the-art performance in real-world applications such as text categorisation, hand-written character recognition, image classification and biosequence analysis, and are now established as one of the standard tools of machine learning and data mining. Students will find the book both stimulating and accessible, while practitioners will be guided smoothly through the material required for a good grasp of the theory and its applications. The concepts are introduced gradually in accessible and self-contained stages, while the presentation is rigorous and thorough. Pointers to relevant literature and web sites containing software ensure that it forms an ideal starting point for further study. Equally, the book and its associated web site will guide practitioners to updated literature, new applications, and on-line software.
From inside the book
Results 6 - 10 of 71
Page 9
... linear functions are the best understood and simplest to apply. Traditional statistics and the classical neural networks literature have developed many methods for discriminating between two classes of instances using linear functions ...
Page 10
... function and the decision rule is given by sgn(f(x)), where we will use the ... equation ⟨w · x⟩ + b = 0 (see Figure 2.1). A hyperplane is an affine subspace ... linear discriminants and perceptrons. The theory of linear discriminants ...
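The function referred to is the real-valued linear function f(x) = ⟨w · x⟩ + b, with the label read off as sgn(f(x)). A minimal NumPy sketch of that decision rule (not code from the book; the tie-breaking convention sgn(0) = +1 is an assumption, since the snippet does not show how points on the boundary are handled):

```python
import numpy as np

def linear_decision(x, w, b):
    """Decision rule sgn(f(x)) for the linear function f(x) = <w . x> + b."""
    f = np.dot(w, x) + b          # real-valued output of the linear function
    return 1 if f >= 0 else -1    # assumed convention: boundary points get +1
```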
Page 11
... functions were introduced in the 1960s for separating points from two ... linear classifications is the procedure proposed by Frank Rosenblatt in 1956 ... Rosenblatt's Perceptron.
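The iterative procedure named here is the perceptron training rule: cycle through the training set and, whenever an example is misclassified, move the weight vector towards it. A minimal NumPy sketch of that classical update (the learning rate and epoch cap are illustrative assumptions, not values from the book):

```python
import numpy as np

def perceptron(X, y, eta=1.0, max_epochs=100):
    """Rosenblatt's perceptron on rows of X with labels y in {-1, +1}."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) <= 0:   # misclassified (or on the boundary)
                w += eta * yi * xi              # rotate w towards the example
                b += eta * yi
                mistakes += 1
        if mistakes == 0:                       # separating hyperplane found
            break
    return w, b
```

On a linearly separable training set the loop is guaranteed to terminate with a separating hyperplane (Novikoff's theorem); otherwise it stops at the epoch cap.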
Page 12
... linear function (w/‖w‖, b/‖w‖), which therefore measures the Euclidean distances of the points from the decision boundary in the input space. Finally, the margin of a training set S is the maximum geometric margin over all ...
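Written out, the definitions this snippet relies on are as follows (reconstructed from the surrounding text; the garbled expression above is the normalised pair (w/‖w‖, b/‖w‖)):

```latex
% Functional margin of example (x_i, y_i) with respect to the hyperplane (w, b):
\gamma_i = y_i \bigl( \langle w \cdot x_i \rangle + b \bigr)

% Geometric margin: the functional margin of the normalised linear function
% (w/\|w\|, b/\|w\|), i.e. the signed Euclidean distance to the boundary:
\gamma_i^{\mathrm{geom}} = y_i \left( \Bigl\langle \frac{w}{\|w\|} \cdot x_i \Bigr\rangle + \frac{b}{\|w\|} \right)

% Margin of a training set S: the maximum over hyperplanes of the minimum
% geometric margin over the examples:
\gamma(S) = \max_{(w,b)} \, \min_{i} \, \gamma_i^{\mathrm{geom}}
```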
Page 15
... linear function, as a positive rescaling of both weights and bias does not change its classification. Later we will use this fact to define the canonical maximal margin hyperplane with respect to a separable training set by fixing the ...
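The rescaling invariance is what makes a canonical choice possible: multiplying (w, b) by any λ > 0 leaves sgn(⟨w · x⟩ + b) unchanged, so each hyperplane has many equivalent representations and one can be fixed by convention. The preview cuts off before the book states its normalisation; the usual one, sketched here as an assumption, fixes the functional margin of the closest points:

```latex
% Positive rescaling does not change the classification:
\operatorname{sgn}\bigl(\langle \lambda w \cdot x \rangle + \lambda b\bigr)
  = \operatorname{sgn}\bigl(\langle w \cdot x \rangle + b\bigr),
  \qquad \lambda > 0.

% Canonical hyperplane for a separable training set S: fix the scale so that
% the closest examples have functional margin exactly 1,
\min_{(x_i, y_i) \in S} y_i \bigl( \langle w \cdot x_i \rangle + b \bigr) = 1,

% after which the geometric margin of the canonical hyperplane is 1/\|w\|.
```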
Contents
The Learning Methodology | 1
Linear Learning Machines | 9
Kernel-Induced Feature Spaces | 26
Generalisation Theory | 52
Optimisation Theory | 79
Support Vector Machines | 93
Implementation Techniques | 125
Applications of Support Vector Machines | 149
A Pseudocode for the SMO Algorithm | 162
References | 173
Index | 187
Other editions - View all
An Introduction to Support Vector Machines and Other Kernel-based Learning ... Nello Cristianini, John Shawe-Taylor. Limited preview - 2000
An Introduction to Support Vector Machines and Other Kernel-based Learning ... Nello Cristianini, John Shawe-Taylor. No preview available - 2000
Common terms and phrases
1-norm soft margin, algorithm, analysis, applied, approach, Bayesian, bias, bound, Chapter, choice, classification, computational, consider, constraints, convergence, convex, corresponding, datasets, Definition, described, dual problem, dual representation, fat-shattering dimension, feasibility gap, feature mapping, feature space, finite, Gaussian processes, generalisation error, geometric margin, given, Hence, heuristics, high dimensional, Hilbert space, hyperplane, hypothesis, inequality, inner product space, input space, introduced, iterative, Karush-Kuhn-Tucker, kernel function, kernel matrix, Lagrange multipliers, Lagrangian, learning algorithm, linear functions, linear learning machines, loss function, machine learning, margin distribution, margin slack vector, maximal margin hyperplane, maximise, minimise, norm, objective function, obtained, on-line, optimisation problem, parameters, perceptron, perceptron algorithm, performance, positive semi-definite, primal and dual, quantity, random examples, real-valued function, Remark, result, ridge regression, Section, sequence, slack variables, soft margin optimisation, solution, solve, subset, Support Vector Machines, SVMs, techniques, Theorem, training data, training examples, training points, training set, update, Vapnik, VC dimension, weight vector, zero