Learning Machines: Foundations of Trainable Pattern-classifying Systems
From inside the book
Results 1 - 3 of 26
Page 20
Note that the decision surfaces are segments of hyperplanes (lines for d = 2), and that S12 is redundant. In the special case in which the linear machine is a minimum-distance classifier, the surface Sij is the hyperplane which is the ...
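The snippet describes the minimum-distance classifier as a special case of a linear machine whose pairwise decision surfaces Sij are hyperplanes. Below is a minimal Python sketch of that equivalence, using illustrative prototype points that are not taken from the book: choosing the largest discriminant g_i(x) = P_i · x - |P_i|^2 / 2 picks the nearest prototype, so each surface Sij is the hyperplane (a line for d = 2) that perpendicularly bisects the segment joining P_i and P_j.

```python
import numpy as np

# Illustrative prototype points (not from the book), one per category.
prototypes = np.array([[0.0, 0.0],   # P1, category 1
                       [4.0, 0.0],   # P2, category 2
                       [0.0, 3.0]])  # P3, category 3

def discriminants(x, P=prototypes):
    """Linear discriminants g_i(x) = P_i . x - 0.5*|P_i|^2.

    Maximizing g_i is equivalent to minimizing |x - P_i|, so this linear
    machine acts as a minimum-distance classifier; each decision surface
    S_ij (where g_i = g_j) is the perpendicular-bisector hyperplane of
    the segment joining P_i and P_j."""
    return P @ x - 0.5 * np.sum(P ** 2, axis=1)

def classify(x):
    return int(np.argmax(discriminants(np.asarray(x, dtype=float)))) + 1

print(classify([3.0, 0.5]))   # nearest to P2 -> category 2
print(classify([0.5, 2.5]))   # nearest to P3 -> category 3
```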
Page 23
where |W| = √(Σ_{i=1}^{d} w_i²). Note from Fig. 2.5 that the absolute value of n · P is the normal Euclidean distance from the origin to the hyperplane. We shall denote this distance by the symbol Δ_W, which we set equal to w_{d+1}/|W|. (If Δ_W > 0, the origin ...
Page 39
... P_{N,M} versus λ for various values of M. Note the pronounced threshold effect, for large M + 1, around λ = 2. Also note that for each value of M, P_{2(M+1),M} = 1/2 (2.45). The threshold effect around 2(M + 1) can be expressed quantitatively by ...
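The page-39 snippet refers to the fraction P_{N,M} of the 2^N dichotomies of N patterns in general position that are implementable with M + 1 adjustable weights, which falls off sharply near λ = N/(M + 1) = 2. A short sketch of that threshold effect under the standard counting formula (this is my reading of the snippet, not code or notation from the book):

```python
from math import comb

def P(N, M):
    """Fraction of the 2**N dichotomies of N patterns in general position
    that are implementable with M + 1 weights:
    P_{N,M} = 2**(1 - N) * sum_{k=0}^{M} C(N - 1, k), and P = 1 for N <= M + 1."""
    if N <= M + 1:
        return 1.0
    return sum(comb(N - 1, k) for k in range(M + 1)) / 2 ** (N - 1)

# P is exactly 1/2 at N = 2(M + 1), i.e. at lambda = 2, and the drop around
# that point becomes steeper as M grows: the threshold effect in the figure.
for M in (5, 25, 100):
    row = [round(P(int(lam * (M + 1)), M), 3) for lam in (1.5, 2.0, 2.5)]
    print(M, row)
```

At N = 2(M + 1) the sum runs over exactly half of the binomial coefficients of N - 1, which is why P_{2(M+1),M} = 1/2, in agreement with the quoted result (2.45).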
What people are saying - Write a review
No reviews found.
Contents
Preface vii | 1 |
SOME NONPARAMETRIC TRAINING METHODS | 65 |
TRAINING THEOREMS | 79 |
Copyright | |
2 other sections not shown
Other editions - View all
Common terms and phrases
adjusted apply assume bank belonging to category called changes Chapter cluster committee components consider consists contains correction corresponding covariance decision surfaces define denote density depends derivation described Development discriminant functions discussed distance distribution element equal error-correction estimates example exists expression FIGURE fixed given implemented important initial layered machine linear dichotomies linear machine linearly separable matrix measurements negative networks normal Note optimum origin parameters partition pattern classifier pattern hyperplane pattern space pattern vector piecewise linear plane points positive presented probability problem proof properties proved PWL machine quadric reduced regions respect response rule sample mean selection separable shown side solution space Stanford step Suppose theorem theory threshold training methods training procedure training sequence training subsets transformation values weight vectors zero