Learning Machines: Foundations of Trainable Pattern-classifying Systems
From inside the book
Results 1 - 3 of 27
Page 53
The notation used in Eq. (3.20) to describe the normal distribution can be made more compact if we define and use the following matrices. Let the pattern vector X be a column vector (a 2 × 1 matrix) with components ...
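The snippet breaks off before giving the compact form itself. For orientation only, the standard matrix form of the normal density that such a passage typically builds toward is sketched below; M (mean vector), Σ (covariance matrix), and the dimension d are generic symbols, not quoted from the book.

```latex
% Standard multivariate normal density in matrix notation (a general
% reference form, not a quotation from the book). X and M are d x 1
% column vectors and \Sigma is the d x d covariance matrix.
\[
  p(X) \;=\; \frac{1}{(2\pi)^{d/2}\,\lvert\Sigma\rvert^{1/2}}
  \exp\!\Bigl[-\tfrac{1}{2}\,(X - M)^{\mathsf{T}}\,\Sigma^{-1}\,(X - M)\Bigr]
\]
```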
Page 58
where N_i is the number of patterns in the training subset X_i; (X)_i is called the sample mean (or center of gravity) of the ith category, and (Σ)_i is called the sample covariance matrix of the ith category. The (X)_i and (Σ)_i are reasonable ...
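As an illustration of the two statistics the excerpt defines (not code from the book), here is a minimal NumPy sketch; it assumes the training subset for one category is stored as an N_i × d array with one pattern vector per row, and it divides the covariance by N_i, following the plain-average reading of the text (some treatments use N_i − 1).

```python
import numpy as np

def sample_statistics(patterns: np.ndarray):
    """Sample mean and sample covariance of one category's training subset.

    `patterns` is an (N_i, d) array: one d-dimensional pattern vector per row.
    Illustrative sketch only; the divisor N_i (rather than N_i - 1) follows
    the plain-average reading of the excerpt.
    """
    n_i = patterns.shape[0]
    mean_i = patterns.mean(axis=0)        # sample mean (center of gravity)
    centered = patterns - mean_i          # subtract the mean from every pattern
    cov_i = centered.T @ centered / n_i   # sample covariance matrix
    return mean_i, cov_i

# Example: three 2-dimensional patterns from one hypothetical category.
X_i = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0]])
m, S = sample_statistics(X_i)
print(m)  # [2. 2.]
print(S)  # 2 x 2 sample covariance matrix
```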
Page 59
derived from the training set as if they were the known means and covariance matrices. If we assume ... Suppose the pattern vectors belonging to category i are normal with known covariance matrix Σ_i and unknown mean vector. Thus, the d ...
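The snippet stops mid-sentence, but the setting it states (normal patterns with known covariance Σ_i and an unknown mean vector) is the classical setup for Bayesian estimation of the mean. As a hedged reference point, the standard textbook result for that setup is given below; the prior parameters M_0 and Σ_0 and the sample count N are generic symbols, not the book's own notation.

```latex
% Standard Bayesian update for an unknown mean with known covariance
% (a general textbook result, not a quotation from this book).
% Prior:  M ~ N(M_0, \Sigma_0);  samples X_1, ..., X_N ~ N(M, \Sigma), \Sigma known.
\[
  \Sigma_N = \bigl(\Sigma_0^{-1} + N\,\Sigma^{-1}\bigr)^{-1},
  \qquad
  M_N = \Sigma_N\bigl(\Sigma_0^{-1} M_0 + N\,\Sigma^{-1}\bar{X}\bigr),
  \qquad
  \bar{X} = \tfrac{1}{N}\sum_{k=1}^{N} X_k ,
\]
% so the posterior of M is N(M_N, \Sigma_N): the posterior mean blends the
% prior mean with the sample mean, weighted by their respective precisions.
```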
Contents
Preface | vii |
... | 1 |
SOME NONPARAMETRIC TRAINING METHODS | 65 |
TRAINING THEOREMS | 79 |
Copyright | |
2 other sections not shown
Common terms and phrases
adjusted apply assume bank belonging to category called changes Chapter cluster committee components consider consists contains correction corresponding covariance decision surfaces define denote density depends derivation described Development discriminant functions discussed distance distribution element equal error-correction estimates example exists expression FIGURE fixed given implemented important initial layered machine linear dichotomies linear machine linearly separable matrix measurements negative networks normal Note optimum origin parameters partition pattern classifier pattern hyperplane pattern space pattern vector piecewise linear plane points positive presented probability problem proof properties proved PWL machine quadric reduced regions respect response rule sample mean selection separable shown side solution space Stanford step Suppose theorem theory threshold training methods training procedure training sequence training subsets transformation values weight vectors zero