An Introduction to Support Vector Machines and Other Kernel-based Learning Methods
Cambridge University Press, 23.3.2000

This is the first comprehensive introduction to Support Vector Machines (SVMs), a new generation of learning systems based on recent advances in statistical learning theory. SVMs deliver state-of-the-art performance in real-world applications such as text categorisation, hand-written character recognition, image classification, and biosequence analysis, and are now established as one of the standard tools for machine learning and data mining. Students will find the book both stimulating and accessible, while practitioners will be guided smoothly through the material required for a good grasp of the theory and its applications. The concepts are introduced gradually in accessible and self-contained stages, while the presentation is rigorous and thorough. Pointers to relevant literature and to web sites containing software ensure that it forms an ideal starting point for further study. Equally, the book and its associated web site will guide practitioners to updated literature, new applications, and on-line software.
From inside the book
Results 1 - 5 of 25
Page v
... Perceptron 11 2.1.2 Other Linear Classifiers 19 2.1.3 Multi-class Discrimination 20 2.2 Linear Regression 20 2.2.1 Least Squares 21 2.2.2 Ridge Regression 22 2.3 Dual Representation of Linear Machines 24 2.4 Exercises 25 2.5 Further ...
Page 8
... perceptron [122] contained many of the features of the systems discussed in the next chapter. In particular, the idea of modelling learning problems as problems of search in a suitable hypothesis space is characteristic of the ...
Page 10
... perceptrons in the early 1960s, mainly due to the work of Rosenblatt. We will refer to the quantities w and b as the weight vector and bias, terms borrowed from the neural networks literature. Sometimes −b is replaced by θ, a quantity ...
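The snippet above introduces the weight vector w and bias b of a linear learning machine, whose decision function is f(x) = sign(⟨w · x⟩ + b). A minimal sketch of that decision function (the weight vector, bias, and data points below are hypothetical, chosen only for illustration):

```python
import numpy as np

def linear_classifier(w, b, x):
    """Linear decision function f(x) = sign(<w, x> + b).

    Writing theta = -b recovers the threshold form mentioned
    in the text: predict +1 when <w, x> >= theta."""
    return 1 if np.dot(w, x) + b >= 0 else -1

# Hypothetical parameters and inputs, for illustration only.
w = np.array([1.0, -1.0])
b = 0.5
print(linear_classifier(w, b, np.array([2.0, 1.0])))  # -> 1
print(linear_classifier(w, b, np.array([0.0, 2.0])))  # -> -1
```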
Page 11
... Perceptron The first iterative algorithm for learning linear classifications is the procedure proposed by Frank Rosenblatt in 1956 for the perceptron. The algorithm created a great deal of interest when it was first introduced. It is an ...
Page 12
... perceptron algorithm on S is at most (2R/γ)².

Given a linearly separable training set S and learning rate η ∈ R⁺
    w_0 ← 0; b_0 ← 0; k ← 0
    R ← max_{1≤i≤ℓ} ||x_i||
    repeat
        for i = 1 to ℓ
            if y_i(⟨w_k · x_i⟩ + b_k) ≤ 0 then ...
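The snippet above shows the book's perceptron training procedure: start from zero weights and bias, and on each mistake update w ← w + η·y_i·x_i and b ← b + η·y_i·R². A runnable Python sketch of that update rule (the toy dataset is hypothetical, not from the book):

```python
import numpy as np

def perceptron(X, y, eta=1.0, max_epochs=100):
    """Rosenblatt's perceptron, following the pseudocode above:
    w <- w + eta * y_i * x_i and b <- b + eta * y_i * R^2 on each
    mistake, where R = max_i ||x_i||. Returns (w, b, k), with k
    the total number of mistakes (updates) made."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    k = 0
    R = max(np.linalg.norm(x) for x in X)
    for _ in range(max_epochs):
        mistakes = 0
        for i in range(n):
            if y[i] * (np.dot(w, X[i]) + b) <= 0:  # mistake on example i
                w = w + eta * y[i] * X[i]
                b = b + eta * y[i] * R ** 2
                k += 1
                mistakes += 1
        if mistakes == 0:  # converged: a full pass with no mistakes
            break
    return w, b, k

# Toy linearly separable data (hypothetical, for illustration).
X = np.array([[2.0, 2.0], [1.0, 3.0], [-1.0, -1.0], [-2.0, 0.0]])
y = np.array([1, 1, -1, -1])
w, b, k = perceptron(X, y)
```

On linearly separable data the loop terminates, and by the mistake bound quoted above k is at most (2R/γ)², where γ is the margin of the separating hyperplane.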
Contents
The Learning Methodology | 1 |
Linear Learning Machines | 9 |
Kernel-Induced Feature Spaces | 26 |
Generalisation Theory | 52 |
Optimisation Theory | 79 |
Support Vector Machines | 93 |
Implementation Techniques | 125 |
Applications of Support Vector Machines | 149 |
A Pseudocode for the SMO Algorithm | 162 |
References | 173 |
Index | 187 |
Other editions - View all
An Introduction to Support Vector Machines and Other Kernel-based Learning ... Nello Cristianini, John Shawe-Taylor Limited preview - 2000 |
Common terms and phrases
1-norm soft margin algorithm analysis applied approach Bayesian bias bound Chapter choice classification computational consider constraints convergence convex corresponding datasets Definition described dual problem dual representation fat-shattering dimension feasibility gap feature mapping feature space finite Gaussian processes generalisation error geometric margin given Hence heuristics high dimensional Hilbert space hyperplane hypothesis inequality inner product space input space introduced iterative Karush-Kuhn-Tucker kernel function kernel matrix Lagrange multipliers Lagrangian learning algorithm linear functions linear learning machines loss function machine learning margin distribution margin slack vector maximal margin hyperplane maximise minimise norm objective function obtained on-line optimisation problem parameters perceptron perceptron algorithm performance positive semi-definite primal and dual quantity random examples real-valued function Remark result ridge regression Section sequence slack variables soft margin optimisation solution solve subset Support Vector Machines SVMs techniques Theorem training data training examples training points training set update Vapnik VC dimension weight vector zero