Nils J. Nilsson. Learning Machines: Foundations of Trainable Pattern-classifying Systems. McGraw-Hill, 1965. 137 pages.
From inside the book
Results 1-3 of 9
Page 53
... column vector (a 2 × 1 matrix) with components x₁ and x₂. Similarly, let the mean vector M be a column vector with components m₁ and m₂. We ...
[Figure 3.3, "Ellipsoidal clusters of patterns": four clusters labeled Category 1 through Category 4 in the (x₁, x₂) plane.]
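The mean vectors in this snippet are the prototypes of the minimum-distance classifier listed among the book's common terms below. A minimal sketch of that idea in Python, with made-up 2-D data standing in for the four clusters of Figure 3.3 (the actual patterns are not shown in the preview):

```python
import numpy as np

# Hypothetical 2-D patterns for four categories, loosely echoing the four
# ellipsoidal clusters of Figure 3.3; none of these values are from the book.
patterns = {
    1: np.array([[1.0, 1.2], [1.3, 0.9], [0.8, 1.1]]),
    2: np.array([[4.0, 4.2], [3.8, 4.1], [4.2, 3.9]]),
    3: np.array([[1.0, 4.0], [1.2, 4.3], [0.9, 3.8]]),
    4: np.array([[4.1, 1.0], [3.9, 1.2], [4.3, 0.8]]),
}

# Mean vector M for each category: a vector with components m1 and m2.
means = {k: pts.mean(axis=0) for k, pts in patterns.items()}

def classify(x):
    """Assign x to the category whose mean vector is nearest in
    Euclidean distance (a minimum-distance classifier)."""
    return min(means, key=lambda k: np.linalg.norm(x - means[k]))

print(classify(np.array([1.1, 1.0])))  # nearest mean is category 1's
```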
Page 108
... columns of M be a column of zeros. Let us delete this column from M to form a square P × P matrix M̂. Each column of M̂ is a vertex belonging to either X₁ or X₂.* In the following proof we do not make use of the fact that the ...
Page 109
... column of M̂ is a vertex belonging to X₁, and cᵢ = −1 if the ith column of M̂ is a vertex belonging to X₂ (6.10). Since M̂ has an inverse (it has rank equal to P), we can always solve for W by W = CM̂⁻¹ (6.11). Thus, since a threshold ...
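Equations (6.10) and (6.11) amount to a linear solve: place the P vertices as the columns of the square matrix M̂, encode set membership in a row vector C of ±1 entries, and recover the weight vector as W = CM̂⁻¹. A minimal sketch under assumed data (P = 3 and an arbitrary membership split; none of these numbers are from the book):

```python
import numpy as np

# Columns of M_hat are P linearly independent vertices (here P = 3, values
# made up), so M_hat has rank P and an inverse exists, as the proof requires.
M_hat = np.array([[1.0, 1.0, 1.0],
                  [0.0, 1.0, 1.0],
                  [0.0, 0.0, 1.0]])

# Eq. (6.10): c_i = +1 if the ith column of M_hat belongs to the first set,
# c_i = -1 if it belongs to the second; this particular split is assumed.
C = np.array([[1.0, -1.0, 1.0]])

# Eq. (6.11): W = C M_hat^{-1}. Solving M_hat^T W^T = C^T gives the same W
# without forming the inverse explicitly.
W = np.linalg.solve(M_hat.T, C.T).T

print(np.allclose(W @ M_hat, C))  # responses W M_hat match C -> True
```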
Contents
TRAINABLE PATTERN CLASSIFIERS  1
SOME NONPARAMETRIC TRAINING METHODS  65
TRAINING THEOREMS  79
Copyright
3 other sections not shown
Common terms and phrases
assume augmented pattern belonging to category Chapter cluster committee machine committee TLUs correction increment covariance matrix d-dimensional decision surfaces denote diagonal matrix discussed dot products error-correction procedure Euclidean distance example Fix and Hodges function g(X) g₁(X) given Hodges method hypersphere image-space implemented initial weight vectors ith bank layer of TLUs layered machine linear dichotomies linear discriminant functions linearly separable loss function mean vector minimum-distance classifier mode-seeking networks nonparametric number of patterns p₁ parameters partition pattern classifier pattern hyperplane pattern space pattern vector patterns belonging perceptron piecewise linear plane point sets positive probability distributions prototype pattern PWL machine quadratic form quadric function rule sample covariance matrix shown in Fig solution weight vectors Stanford subsets X1 subsidiary discriminant functions Suppose TLU response training patterns training sequence training set training subsets transformation two-layer machine values W₁ weight point weight space weight-vector sequence X1 and X2 zero
References to this book
A Probabilistic Theory of Pattern Recognition. Luc Devroye, László Györfi, Gábor Lugosi. Limited preview - 1997.