Learning Machines: Foundations of Trainable Pattern-classifying Systems. McGraw-Hill, 1965 - 137 pages
From inside the book
Results 1-3 of 5
Page 100
... majority of the weight vectors have negative dot products with Y. Let the weight vectors at this stage be given by W1(k), W2(k), . . . , WP(k). In describing the rule for modifying the weight vectors we shall make use of the ...
Page 101
... majority of the committee TLUs to respond negatively, we adjust the ½(|N| + 1) weight vectors making the least-negative (but not positive) dot products with Yk. If the weight vector Wi(k) is among this set of ½(|N| + 1) ...
Page 102
... majority make positive dot products with each of the pattern vectors Y1, Y2, Y3; then adjustments to the weight vector(s) are made whenever N < 0. (The reader could assume, for example, that Y1 contains Y2 and that Y2 ...
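The page 100-102 excerpts describe the committee-machine error-correction rule only in fragments. Below is a minimal sketch of the idea as quoted above, not Nilsson's exact procedure: the NumPy implementation, the function name train_committee, the random initialization, and the fixed-increment correction W[i] += desired * y are assumptions of this sketch.

import numpy as np

def train_committee(patterns, labels, num_tlus=3, max_epochs=100, seed=0):
    """Illustrative committee-machine error-correction training (sketch).

    patterns: (n, d) array of augmented pattern vectors Yk
    labels:   (n,) array of desired responses, each +1 or -1
    """
    assert num_tlus % 2 == 1, "odd committee size so the vote is never tied"
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(num_tlus, patterns.shape[1]))  # one weight vector per TLU

    for _ in range(max_epochs):
        mistakes = 0
        for y, desired in zip(patterns, labels):
            dots = W @ y                               # dot products Wi . Yk
            responses = np.where(dots >= 0, 1, -1)     # each TLU votes +1 or -1
            vote = int(responses.sum())                # committee vote N
            if vote * desired > 0:
                continue                               # majority already correct
            mistakes += 1
            # Flipping one TLU's response changes the vote by 2, so
            # (|N| + 1) / 2 wrongly-voting TLUs must change sides.
            num_to_fix = (abs(vote) + 1) // 2
            wrong = np.where(responses != desired)[0]
            # Adjust the wrong TLUs whose dot products are least wrong,
            # i.e. closest to zero ("least negative but not positive"
            # when the desired response is +1).
            wrong = wrong[np.argsort(np.abs(dots[wrong]))]
            for i in wrong[:num_to_fix]:
                W[i] += desired * y                    # fixed-increment correction
        if mistakes == 0:
            break
    return W

Flipping one TLU's response shifts the vote sum N by 2, so ½(|N| + 1) responses must change sign to reverse the majority; the rule quoted on page 101 therefore targets only the TLUs whose dot products are closest to the decision boundary.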
Contents
TRAINABLE PATTERN CLASSIFIERS | 1 |
PARAMETRIC TRAINING METHODS | 43 |
SOME NONPARAMETRIC TRAINING METHODS | 65 |
Copyright
3 other sections not shown
Common terms and phrases
adjusted apply assume bank called cells changes Chapter classifier cluster column committee machine components consider consists contains correction corresponding covariance decision surfaces define denote density depends described discriminant functions discussed distance distributions elements equal error-correction estimates example exist expression FIGURE fixed given implemented initial layered machine linear machine linearly separable lines majority matrix mean measurements modes negative networks nonparametric normal Note optimum origin parameters partition pattern hyperplane pattern space pattern vector pattern-classifying piecewise linear plane points positive presented probability problem properties PWL machine quadric regions respect response rule selection separable sequence side solution space step subsidiary discriminant Suppose theorem theory threshold training methods training patterns training procedure training sequence training subsets transformation values weight vectors X1 and X2 Y₁ zero
References to this book
A Probabilistic Theory of Pattern Recognition, Luc Devroye, László Györfi, Gábor Lugosi - 1997