Learning Machines: Foundations of Trainable Pattern-classifying Systems
From inside the book
Results 1-3 of 20
Page 24
Let us define the Euclidean distance d(X, P_i) from an arbitrary point X to the point set P_i by

    d(X, P_i) = \min_{j = 1, \ldots, L} \left\lVert X - P_i^{(j)} \right\rVert    (2·16)

That is, the distance between X and P_i is the smallest of the distances ...
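Below is a minimal Python sketch, not taken from the book, of the minimum-distance rule this definition supports; NumPy, the helper name distance_to_point_set, and the sample point sets are all assumptions chosen for illustration.

    import numpy as np

    def distance_to_point_set(x, points):
        # Eq. (2.16): d(X, P_i) = min over j of ||X - P_i^(j)||,
        # i.e. the smallest distance from X to any point of the set P_i.
        x = np.asarray(x, dtype=float)
        points = np.asarray(points, dtype=float)   # shape (L, n): L prototype points
        return np.min(np.linalg.norm(points - x, axis=1))

    # Illustrative use: place X in the category whose point set lies nearest.
    P1 = [[0.0, 0.0], [1.0, 0.0]]
    P2 = [[5.0, 5.0], [6.0, 5.0]]
    x = [0.4, 0.2]
    category = 1 if distance_to_point_set(x, P1) < distance_to_point_set(x, P2) else 2
    print(category)   # -> 1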
Page 47
As in Chapter 1, we define the discriminant function g(X) = g_1(X) - g_2(X). If g(X) > 0, the machine places X in category 1; if g(X) < 0, the machine places X in category 2. From Eq. (3·76) we can derive g ...
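A short Python sketch of this sign rule follows; it is illustrative only, and the linear form of g_1 and g_2 together with the weight values below are hypothetical, not taken from Eq. (3·76).

    import numpy as np

    def classify(x, g1, g2):
        # g(X) = g1(X) - g2(X); the sign of g decides the category.
        g = g1(x) - g2(x)
        if g > 0:
            return 1      # machine places X in category 1
        if g < 0:
            return 2      # machine places X in category 2
        return None       # X lies on the decision surface g(X) = 0

    # Hypothetical linear discriminants g_i(X) = W_i . X + w_i0.
    w1, w10 = np.array([1.0, -2.0]), 0.5
    w2, w20 = np.array([-0.5, 1.0]), 0.0
    x = np.array([1.0, 1.0])
    print(classify(x, lambda v: w1 @ v + w10, lambda v: w2 @ v + w20))   # -> 2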
Page 128
Let us now define the real, diagonal matrices

    D_1 = \begin{bmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_{p_1} \end{bmatrix}
    \quad \text{and} \quad
    D_2 = \begin{bmatrix} -\lambda_{p_1+1} & & 0 \\ & \ddots & \\ 0 & & -\lambda_{p_1+p_2} \end{bmatrix}    (A.4)

where λ_1, ..., λ_{p_1} are the first p_1 diagonal elements of A, and λ_{p_1+1}, ..., λ_{p_1+p_2} are the next p_2 diagonal elements of A.
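The Python sketch below is an assumption about how D_1 and D_2 could be formed numerically (the name split_diagonal and the sample matrix are invented for illustration): it splits the diagonal of A and negates the second block, so that D_2 has positive entries whenever the corresponding diagonal elements of A are negative.

    import numpy as np

    def split_diagonal(A, p1, p2):
        lam = np.diag(A)                    # diagonal elements of A
        D1 = np.diag(lam[:p1])              # diag(lambda_1, ..., lambda_p1)
        D2 = np.diag(-lam[p1:p1 + p2])      # diag(-lambda_{p1+1}, ..., -lambda_{p1+p2})
        return D1, D2

    A = np.diag([2.0, 1.0, -3.0, -0.5, 0.0])
    D1, D2 = split_diagonal(A, p1=2, p2=2)
    print(np.diag(D1))   # [2. 1.]
    print(np.diag(D2))   # [3. 0.5]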
Contents
I | 1 |
SOME NONPARAMETRIC TRAINING METHODS | 65 |
APPENDIX | 127 |
Copyright | |
1 other section not shown
Common terms and phrases
adjusted apply assume bank belonging to category called changes Chapter cluster committee components consider consists contains correction corresponding covariance decision surfaces define denote density depends derivation described discriminant functions discussed distance distribution element equal error-correction estimates example exists expression FIGURE fixed given implemented important initial layered machine linear dichotomies linear machine linearly separable matrix measurements negative normal Note optimum origin parameters partition pattern classifier pattern hyperplane pattern space pattern vector piecewise linear plane points positive presented probability problem proof properties proved PWL machine quadric reduced regions respect response rule sample mean selection separable shown side solution space specific Stanford step Suppose theorem theory threshold training methods training patterns training procedure training sequence training subsets transformation values weight vectors zero
References to this work
A Probabilistic Theory of Pattern Recognition. Luc Devroye, László Györfi, Gabor Lugosi. Limited preview - 1997