Learning Machines: Foundations of Trainable Pattern-classifying Systems
From inside the book
Results 1 - 3 of 18
Page 56
3.9 Some special cases involving identical covariance matrices
For the optimum discriminant functions for normal patterns, expansion of Eq. (3.31) yields $g_i(X) = -\tfrac{1}{2} X' \Sigma_i^{-1} X + X' \Sigma_i^{-1} M_i - \tfrac{1}{2} M_i' \Sigma_i^{-1} M_i + \log p_i - \tfrac{1}{2} \log |\Sigma_i|$ ...
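The expansion above is the quadratic discriminant for a normal pattern category. As an illustrative aside (not from the book), a minimal NumPy sketch of that expansion follows; the function name gaussian_discriminant and its arguments mean, cov, prior are hypothetical:

    import numpy as np

    def gaussian_discriminant(x, mean, cov, prior):
        """Evaluate the expanded discriminant for one normal category:
        g_i(X) = -1/2 X' S^-1 X + X' S^-1 M_i - 1/2 M_i' S^-1 M_i
                 + log p_i - 1/2 log |S_i|
        where S stands for the covariance matrix Sigma_i (names illustrative)."""
        cov_inv = np.linalg.inv(cov)
        quadratic = -0.5 * x @ cov_inv @ x                 # -1/2 X' S^-1 X
        linear = x @ cov_inv @ mean                        # X' S^-1 M_i
        constant = (-0.5 * mean @ cov_inv @ mean           # -1/2 M_i' S^-1 M_i
                    + np.log(prior)                        # log p_i
                    - 0.5 * np.log(np.linalg.det(cov)))    # -1/2 log |S_i|
        return quadratic + linear + constant

When the covariance matrices of all categories are identical, the terms $-\tfrac{1}{2}X'\Sigma^{-1}X$ and $-\tfrac{1}{2}\log|\Sigma|$ are the same for every category and can be dropped from the comparison, which is what reduces these special cases to linear discriminant functions.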
Page 58
... where $N_i$ is the number of patterns in the training subset $\mathscr{X}_i$; $\langle X \rangle_i$ is called the sample mean (or center of gravity) of the ith category, and $\langle \Sigma \rangle_i$ is called the sample covariance matrix of the ith category.
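The snippet only names the estimates, so here is a rough sketch of how the sample mean and sample covariance of one training subset could be computed; the function name sample_estimates is hypothetical, and the 1/N_i normalization is an assumption since the book's formula is cut off in the preview:

    import numpy as np

    def sample_estimates(patterns):
        """Sample mean (center of gravity) and sample covariance matrix of
        one category, given its N_i training patterns as rows of `patterns`."""
        n_i = patterns.shape[0]                   # N_i, number of patterns
        mean = patterns.sum(axis=0) / n_i         # sample mean of the ith category
        centered = patterns - mean
        cov = centered.T @ centered / n_i         # sample covariance matrix
        return mean, cov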
Page 59
... derived from the training set as if they were the known means and covariance matrices. If we assume appropriate probability distributions for the unknown mean vectors and covariance matrices, we can derive a training process which ...
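This page describes the plug-in approach: the sample estimates are used as if they were the known parameters. A self-contained sketch of such a decision rule, under the same assumptions as the previous snippets (the names train_plug_in_classifier, training_subsets, and priors are hypothetical):

    import numpy as np

    def train_plug_in_classifier(training_subsets, priors):
        """Estimate (mean, covariance) for each category from its training
        subset and classify by the largest resulting discriminant g_i(X),
        treating the estimates as if they were the true parameters."""
        params = []
        for x_i in training_subsets:                        # x_i: N_i-by-d array
            m = x_i.mean(axis=0)                            # sample mean
            c = (x_i - m).T @ (x_i - m) / len(x_i)          # sample covariance
            params.append((m, np.linalg.inv(c), np.linalg.slogdet(c)[1]))

        def classify(x):
            scores = [
                -0.5 * x @ ci @ x + x @ ci @ m - 0.5 * m @ ci @ m
                + np.log(p) - 0.5 * logdet
                for (m, ci, logdet), p in zip(params, priors)
            ]
            return int(np.argmax(scores))                   # arg max_i g_i(X)

        return classify

For example, classify = train_plug_in_classifier([patterns_1, patterns_2], [0.5, 0.5]) returns a rule that assigns a new pattern x to the category with the largest plug-in discriminant via classify(x).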
Common terms and phrases
adjusted apply assume bank belonging to category called changes Chapter classifier cluster committee components consider consists contains correction corresponding decision surfaces define denote density depends derivation described discriminant functions discussed distance distribution element equal error-correction estimates example exists expression FIGURE fixed gi(X) given illustrated implemented important initial known layered machine linear dichotomies linear machine linearly separable negative normal Note optimum origin parameters partition pattern classifier pattern hyperplane pattern space pattern vector piecewise linear plane points positive presented probability problem proof properties proved PWL machine quadric reduced regions respect response rule sample mean selected separable shown side space specific Stanford step Suppose theorem theory threshold training methods training patterns training procedure training sequence training subsets transformation values weight vectors zero
References to this work
A Probabilistic Theory of Pattern Recognition. Luc Devroye, László Györfi, Gábor Lugosi, 1997.