Learning Machines: Foundations of Trainable Pattern-classifying Systems. McGraw-Hill, 1965 - 137 pages
From inside the book
Results 1 - 3 of 16
Page 20
... linear, and the subsets X1, X2, ..., XR are linearly separable if and only if linear discriminant functions g1, g2, ..., gR exist such that

gi(X) > gj(X)   for all X in Xi,   i, j = 1, ..., R,   j ≠ i   (2.9)

As ...
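As a rough illustration of condition (2.9), here is a minimal Python sketch (not from the book; the weights, thresholds, and function names are made up) of a linear machine that evaluates R linear discriminant functions and assigns a pattern to the category with the largest value:

```python
import numpy as np

def linear_discriminants(X, weights, thresholds):
    """Evaluate g_i(X) = W_i . X + w_i0 for every category i."""
    return weights @ X + thresholds

def classify(X, weights, thresholds):
    """A linear machine assigns X to the category whose discriminant is largest."""
    return int(np.argmax(linear_discriminants(X, weights, thresholds)))

# Illustrative 2-dimensional, 3-category example (values are invented).
weights = np.array([[1.0, 0.0],     # W_1
                    [0.0, 1.0],     # W_2
                    [-1.0, -1.0]])  # W_3
thresholds = np.array([0.0, 0.0, 0.5])
print(classify(np.array([2.0, -1.0]), weights, thresholds))  # -> 0 (category 1)
```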
Page 21
... linearly separable, then each pair Xi, Xj, i, j = 1, ..., R, i ≠ j, is also linearly separable; that is, if X1, X2, ..., XR are linearly separable, then X1, X2, ..., XR are also pairwise linearly separable. 2.6 The threshold logic unit (TLU) If R = 2, a linear machine employs a single ...
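A hedged sketch of the R = 2 case described in Section 2.6: a single threshold logic unit compares a dot product against a threshold, which amounts to comparing two linear discriminants (the weights and threshold below are illustrative, not taken from the book):

```python
import numpy as np

def tlu(X, W, theta):
    """Threshold logic unit: category 1 if W . X exceeds the threshold theta,
    category 2 otherwise (equivalent to testing g1(X) > g2(X))."""
    return 1 if np.dot(W, X) > theta else 2

# Illustrative weights for a 3-dimensional pattern.
W = np.array([0.5, -1.0, 2.0])
print(tlu(np.array([1.0, 0.0, 1.0]), W, theta=1.0))  # -> 1
```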
Page 87
... linearly separable if and only if there exist R solution weight vectors W1, W2, ..., WR such that

Wi · Y > Wj · Y   for each Y in Yi,   i, j = 1, ..., R,   i ≠ j   (5.28)

If the subsets are linearly separable, then a linear machine exists which ...
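To make condition (5.28) concrete, the following sketch (illustrative values only; the helper name satisfies_528 is ours, not the book's) checks whether candidate weight vectors W1, ..., WR separate given subsets of augmented patterns Y1, ..., YR:

```python
import numpy as np

def satisfies_528(subsets, weight_vectors):
    """Check whether W_i . Y > W_j . Y for every Y in Y_i and every j != i,
    i.e. whether the given weight vectors witness linear separability."""
    R = len(subsets)
    for i in range(R):
        for Y in subsets[i]:
            scores = [np.dot(W, Y) for W in weight_vectors]
            if any(scores[i] <= scores[j] for j in range(R) if j != i):
                return False
    return True

# Made-up augmented patterns (last component 1) and candidate weight vectors.
subsets = [
    [np.array([0.0, 0.0, 1.0])],  # Y_1
    [np.array([2.0, 2.0, 1.0])],  # Y_2
]
weights = [np.array([-1.0, -1.0, 1.0]), np.array([1.0, 1.0, -1.0])]
print(satisfies_528(subsets, weights))  # -> True
```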
Contents
TRAINABLE PATTERN CLASSIFIERS | 1
PARAMETRIC TRAINING METHODS | 43
SOME NONPARAMETRIC TRAINING METHODS | 65
Copyright
3 other sections not shown
Common terms and phrases
assume augmented pattern belonging to category Chapter cluster committee machine committee TLUs components correction increment covariance matrix d-dimensional decision surfaces denote diagonal matrix discussed dot products error-correction procedure Euclidean distance example Fix and Hodges g₁(X) given Hodges method hypersphere image-space implemented initial weight vectors ith bank layer of TLUs layered machine linear dichotomies linear discriminant functions linearly separable loss function mean vector minimum-distance classifier mode-seeking networks nonparametric number of patterns p₁ parameters parametric training partition pattern hyperplane pattern points pattern space pattern vector pattern-classifying patterns belonging perceptron piecewise linear plane point sets positive probability distributions prototype pattern PWL machine quadratic form quadric function rule sample covariance matrix second layer shown in Fig solution weight vectors Stanford subsets X1 subsidiary discriminant functions Suppose training patterns training sequence training set training subsets transformation two-layer machine values W₁ weight point weight space weight-vector sequence X1 and X2 zero
References to this book
A Probabilistic Theory of Pattern Recognition. Luc Devroye, László Györfi, Gabor Lugosi. Limited preview - 1997