Learning Machines: Foundations of Trainable Pattern-classifying Systems. McGraw-Hill, 1965 - 137 pages
From inside the book
Results 1 - 3 of 12
Page 98
... "committee" of weight vectors W1, W2, and W3 in Fig. 6.3. With respect to these weight vectors, we have the inequalities

W1 · Y1 > 0    W2 · Y1 < 0    W3 · Y1 > 0
W1 · Y2 > 0    W2 · Y2 > 0
W1 · Y3 > 0    W2 · Y3 < 0    W3 · ...    (6.4)
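The sign pattern visible in (6.4) can be checked numerically. Below is a minimal sketch (not from the book) using NumPy; the weight vectors and augmented pattern vectors are made-up values chosen only to reproduce the signs shown in the snippet, and the variable names are assumptions for illustration.

import numpy as np

# Illustrative values only; the vectors of Fig. 6.3 are not reproduced here.
W = np.array([[ 1.0,  1.0,  1.0],    # W1
              [-1.0,  1.0, -1.0],    # W2
              [ 1.0, -1.0,  1.0]])   # W3
Y = np.array([[ 1.0,  1.0,  1.0],    # Y1 (augmented pattern vectors)
              [-1.0,  2.0,  1.0],    # Y2
              [ 2.0, -1.0,  1.0]])   # Y3

dots = W @ Y.T                        # dots[i, j] = W_{i+1} . Y_{j+1}
signs = np.sign(dots)                 # sign pattern as in the inequalities (6.4)
votes = np.sign(signs.sum(axis=0))    # majority (committee) response to each pattern
print(signs)
print(votes)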
Page 99
... committee machine with a fixed vote-taking TLU.
[FIGURE 6.4 A committee machine: a pattern with d + 1 augmented components feeds P committee TLUs (first layer), whose outputs feed a vote-taking TLU (second layer) that produces the response.]
6.3 A training procedure for committee machines ...
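As a rough sketch (not the book's code) of the architecture in Fig. 6.4, the committee machine with a fixed vote-taking TLU can be written as a first layer of threshold units followed by a majority vote. The function name and array layout below are assumptions for illustration.

import numpy as np

def committee_response(W, x):
    # W : (P, d+1) array of committee TLU weight vectors (P odd).
    # x : (d,) pattern; a 1 is appended to form the augmented pattern vector.
    y = np.append(x, 1.0)                      # augmented (d+1)-component pattern
    first_layer = np.where(W @ y > 0, 1, -1)   # committee TLU responses (first layer)
    return 1 if first_layer.sum() > 0 else -1  # fixed vote-taking TLU (second layer)

With P odd the vote-taking sum is never zero, so the response is always +1 or -1.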
Page 100
... committee TLUs have negative responses. If the responses of at least ½(|N_X| + 1) of these negatively responding TLUs were changed from -1 to +1, then the majority of the committee TLUs would have positive responses, and the ...
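A hedged sketch of how such a correction step might look in code is given below. It assumes that N_X is the sum of the first-layer responses (the vote margin), and that the TLUs selected for adjustment are the wrongly responding units whose dot products lie closest to zero, corrected by a simple fixed increment; these choices are illustrative, not necessarily the book's exact procedure.

import numpy as np

def committee_correction(W, x, desired=1):
    # W : (P, d+1) float array of committee weight vectors, modified in place.
    # x : (d,) training pattern; desired : +1 or -1.
    y = np.append(x, 1.0)
    dots = W @ y
    responses = np.where(dots > 0, 1, -1)
    vote = responses.sum()                        # N_X: sum of first-layer responses
    if np.sign(vote) == desired:
        return W                                  # committee already correct
    k = (abs(vote) + 1) // 2                      # minimum number of responses to flip
    wrong = np.flatnonzero(responses != desired)  # TLUs voting against the desired class
    # Assumption: adjust the wrongly responding TLUs closest to their hyperplanes.
    chosen = wrong[np.argsort(np.abs(dots[wrong]))[:k]]
    W[chosen] += desired * y                      # fixed-increment correction (may need repeating)
    return W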
Contents
TRAINABLE PATTERN CLASSIFIERS | 1
PARAMETRIC TRAINING METHODS | 43
SOME NONPARAMETRIC TRAINING METHODS | 65
Copyright
3 other sections not shown
Other editions - View all
Common terms and phrases
adjusted apply assume bank called cells changes Chapter classifier cluster column committee machine components consider consists contains correction corresponding covariance decision surfaces define denote density depends described discriminant functions discussed distance distributions elements equal error-correction estimates example exist expression FIGURE fixed given implemented initial layered machine linear machine linearly separable lines majority matrix mean measurements modes negative networks nonparametric normal Note optimum origin parameters partition pattern hyperplane pattern space pattern vector pattern-classifying piecewise linear plane points positive presented probability problem properties PWL machine quadric regions respect response rule selection separable sequence side solution space Stanford step subsidiary discriminant Suppose theorem theory threshold training methods training patterns training procedure training sequence training subsets transformation values weight vectors X1 and X2 Y₁ zero
References to this work
A Probabilistic Theory of Pattern Recognition. Luc Devroye, László Györfi, Gabor Lugosi. Limited preview - 1997