Learning Machines: Foundations of Trainable Pattern-classifying Systems. McGraw-Hill, 1965 - 137 pages
From inside the book
Results 1 - 3 of 16
Page 20
... linear, and the subsets X1, X2, ..., XR are linearly separable if and only if linear discriminant functions g1, g2, ..., gR exist such that

gi(X) > gj(X)   for all X in Xi, j = 1, ..., R, j ≠ i, for all i = 1, ..., R   (2.9)

...
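Condition (2.9) describes a linear machine: each category has its own linear discriminant, and a pattern is assigned to the category whose discriminant is largest. The following is a minimal illustrative sketch (not from the book) of that decision rule and of checking condition (2.9) on labeled training subsets; the weight values, thresholds, and helper names here are hypothetical, and patterns are assumed to be NumPy vectors.

```python
import numpy as np

def discriminants(X, W, w0):
    """Evaluate the R linear discriminants g_i(X) = W_i . X + w_i0."""
    return W @ X + w0          # shape (R,)

def classify(X, W, w0):
    """Linear-machine decision rule: pick the category with the largest g_i(X)."""
    return int(np.argmax(discriminants(X, W, w0)))

def satisfies_2_9(subsets, W, w0):
    """Check condition (2.9): g_i(X) > g_j(X) for every X in X_i and every j != i."""
    for i, patterns in enumerate(subsets):
        for X in patterns:
            g = discriminants(X, W, w0)
            if any(g[i] <= g[j] for j in range(len(subsets)) if j != i):
                return False
    return True

# Toy example with R = 3 categories in the plane (illustrative values only).
W  = np.array([[ 1.0,  0.0],
               [-1.0,  0.0],
               [ 0.0,  1.0]])
w0 = np.zeros(3)
subsets = [
    [np.array([ 2.0, 0.0])],   # X1
    [np.array([-2.0, 0.0])],   # X2
    [np.array([ 0.0, 2.0])],   # X3
]
print(satisfies_2_9(subsets, W, w0))   # True for this toy data
```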
Page 21
... linearly separable, then ... to show that if the subsets X1, X2, ..., XR are linearly separable, then each pair of subsets Xi, Xj, i, j = 1, ..., R, i ≠ j, is also linearly separable. That is, if X1, X2, ..., XR are linearly separable, then X1, X2, ..., XR ...
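The step sketched in this excerpt is that multiclass linear separability implies pairwise linear separability: if g1, ..., gR satisfy (2.9), then for any pair i, j the single function g(X) = gi(X) - gj(X) is itself linear, positive on Xi, and negative on Xj, so it separates that pair. A small sketch of that construction, under the same illustrative assumptions as the previous example:

```python
import numpy as np

def pairwise_discriminant(W, w0, i, j):
    """Build g(X) = g_i(X) - g_j(X); it is linear, positive on X_i and negative
    on X_j whenever condition (2.9) holds."""
    return W[i] - W[j], w0[i] - w0[j]

# Using the toy weights from the previous sketch:
W  = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0]])
w0 = np.zeros(3)
w, b = pairwise_discriminant(W, w0, 0, 1)
print(w @ np.array([ 2.0, 0.0]) + b > 0)   # True: a pattern from X1 lies on the positive side
print(w @ np.array([-2.0, 0.0]) + b < 0)   # True: a pattern from X2 lies on the negative side
```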
Page 107
... linearly separable. For any given training subsets X1 and X2 it would be of interest to know necessary and sufficient conditions on the hyperplanes implemented by the first-layer TLUs such that g1(1) and g2(1) are linearly ...
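This excerpt concerns layered machines: a first layer of TLUs (threshold logic units) maps each pattern to a vector of +/-1 responses determined by hyperplanes, and the question raised is when the images of X1 and X2 under that mapping are linearly separable for the second layer. Below is a minimal illustrative sketch of such a first-layer mapping; it does not follow the book's notation, and the hyperplane weights shown are hypothetical.

```python
import numpy as np

def first_layer_image(X, V, v0):
    """Map a pattern X to its first-layer image: one +/-1 response per TLU,
    where TLU k implements the hyperplane V_k . X + v_k0 = 0."""
    return np.where(V @ X + v0 > 0, 1, -1)

# Two hypothetical first-layer TLUs (hyperplanes) in the plane.
V  = np.array([[1.0,  1.0],
               [1.0, -1.0]])
v0 = np.array([0.0, 0.0])

print(first_layer_image(np.array([ 2.0, 0.5]), V, v0))   # [ 1  1]
print(first_layer_image(np.array([-1.0, 3.0]), V, v0))   # [ 1 -1]
```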
Contents
TRAINABLE PATTERN CLASSIFIERS    1
PARAMETRIC TRAINING METHODS    43
SOME NONPARAMETRIC TRAINING METHODS    65
Copyright
3 other sections not shown
Other editions
Common terms and phrases
adjusted apply assume bank called cells changes Chapter classifier cluster column committee machine components consider consists contains correction corresponding covariance decision surfaces define denote density depends described discriminant functions discussed distance distributions elements equal error-correction estimates example exist expression FIGURE fixed given implemented initial layered machine linear machine linearly separable lines majority matrix mean measurements modes negative networks nonparametric normal Note optimum origin parameters partition pattern hyperplane pattern space pattern vector pattern-classifying piecewise linear plane points positive presented probability problem properties PWL machine quadric regions respect response rule selection separable sequence side solution space step subsidiary discriminant Suppose theorem theory threshold training methods training patterns training procedure training sequence training subsets transformation values weight vectors X1 and X2 Y₁ zero
References to this work
A Probabilistic Theory of Pattern Recognition, Luc Devroye, László Györfi, Gabor Lugosi. Limited preview - 1997