Learning Machines: Foundations of Trainable Pattern-classifying Systems
From inside the book
Results 1 - 3 of 7
Page 44
The parametric training method for the design of discriminant functions then consists of three steps: 1. ... presumed to be the true values of the parameters and are used in the expressions for the discriminant functions developed in step 1.
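The fragment breaks off after naming the three steps, but the estimate-then-substitute idea it describes can be sketched. The snippet below is a minimal Python illustration only, assuming normal class-conditional densities with a shared covariance matrix; the function names and the pooled-covariance estimate are assumptions made for this sketch, not taken from the book.

```python
import numpy as np

def train_parametric(patterns, labels):
    """Sketch of the later steps of the parametric method: estimate the
    unknown parameters from labelled samples, then treat the estimates as
    if they were the true values (as the fragment describes)."""
    classes = np.unique(labels)
    means = {c: patterns[labels == c].mean(axis=0) for c in classes}
    # Pooled (shared) covariance estimate over all categories.
    centered = np.vstack([patterns[labels == c] - means[c] for c in classes])
    cov_inv = np.linalg.inv(np.cov(centered, rowvar=False))
    return means, cov_inv

def discriminant(x, mean, cov_inv, log_prior=0.0):
    """Discriminant function of the equal-covariance normal case, evaluated
    with the estimated parameters substituted for the true ones."""
    w = cov_inv @ mean
    w0 = -0.5 * mean @ cov_inv @ mean + log_prior
    return w @ x + w0

def classify(x, means, cov_inv):
    """Assign x to the category whose discriminant function is largest."""
    return max(means, key=lambda c: discriminant(x, means[c], cov_inv))
```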
Page 86
We now compute the decrease in squared distance to $W$, $d_{k+1}$, effected by the kth step:

$d_{k+1} = |W - W_k|^2 - |W - W_{k+1}|^2$   (5.24)

Let $Y_k$ be the kth pattern vector in the ...
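As a rough illustration of the quantity in Eq. (5.24), here is a Python sketch. It assumes the fixed-increment error-correction rule $W_{k+1} = W_k + Y_k$, applied whenever $Y_k$ is misclassified, and it takes the solution weight vector $W$ as given, as the surrounding derivation does; the function names are illustrative only.

```python
import numpy as np

def fixed_increment_step(w_k, y_k):
    """One fixed-increment error-correction step (assumed update rule): the
    pattern is added to the weight vector whenever the current response
    w_k . y_k is not positive (patterns are taken as sign-adjusted so that a
    positive response is the desired one)."""
    if np.dot(w_k, y_k) <= 0:
        return w_k + y_k
    return w_k

def squared_distance_decrease(w_solution, w_before, w_after):
    """d_{k+1} = |W - W_k|^2 - |W - W_{k+1}|^2  (Eq. 5.24): how much the kth
    step reduces the squared distance to a solution weight vector W."""
    return np.sum((w_solution - w_before) ** 2) - np.sum((w_solution - w_after) ** 2)
```

Under that assumed update rule, expanding (5.24) gives $d_{k+1} = 2(W - W_k) \cdot Y_k - |Y_k|^2$, the kind of per-step quantity a convergence argument bounds from below.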
Page 89
The first step is to generate a new set Z of higher-dimensional vectors from the training set Y. Each vector Z in the set Z is of RD dimensions; it will be convenient to think of the RD dimensions of Z as being split into R blocks of D dimensions each.
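The fragment fixes only the layout of the new vectors (R blocks of D dimensions each). The sketch below assumes the usual reduction for an R-category linear machine, in which each D-dimensional training pattern from category i yields one RD-dimensional vector per competing category j, carrying the pattern in block i, its negative in block j, and zeros elsewhere; the helper names and the ±Y block contents are assumptions for illustration.

```python
import numpy as np

def lift_pattern(y, category, R):
    """Build the RD-dimensional vectors for one D-dimensional pattern y known
    to belong to `category` (0-based) in an R-category problem (assumed
    construction): +y in the block of the true category, -y in the block of
    one competing category, zeros in the remaining blocks."""
    D = y.shape[0]
    z_vectors = []
    for j in range(R):
        if j == category:
            continue
        z = np.zeros(R * D)
        z[category * D:(category + 1) * D] = y
        z[j * D:(j + 1) * D] = -y
        z_vectors.append(z)
    return z_vectors

def build_z_set(patterns, labels, R):
    """Generate the full higher-dimensional set Z from the training set Y."""
    Z = []
    for y, c in zip(patterns, labels):
        Z.extend(lift_pattern(np.asarray(y, dtype=float), c, R))
    return np.array(Z)
```

Under that assumption, stacking the machine's R weight vectors into a single RD-dimensional vector W turns "classify every training pattern correctly" into "W · Z > 0 for every Z in the new set," which is what makes the move to RD dimensions useful for the training theorems.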
Contents
Preface vii | 1 |
SOME NONPARAMETRIC TRAINING METHODS | 65 |
TRAINING THEOREMS | 79 |
Copyright | |
2 other sections not shown
Other editions - View all
Common terms and phrases
adjusted apply assume bank belonging to category called changes Chapter cluster committee components consider consists contains correction corresponding covariance decision surfaces define denote density depends derivation described Development discriminant functions discussed distance distribution element equal error-correction estimates example exists expression FIGURE fixed given implemented important initial layered machine linear dichotomies linear machine linearly separable matrix measurements negative networks normal Note optimum origin parameters partition pattern classifier pattern hyperplane pattern space pattern vector piecewise linear plane points positive presented probability problem proof properties proved PWL machine quadric reduced regions respect response rule sample mean selection separable shown side solution space Stanford step Suppose theorem theory threshold training methods training procedure training sequence training subsets transformation values weight vectors zero