Learning Machines: Foundations of Trainable Pattern-classifying Systems
From inside the book
Results 1-3 of 58
Page 56
Each pattern belonging to the ith category is a random vector given by the sum of a fixed, non-random vector P_i plus a random "noise" vector N_i. The random vectors N_i (i = 1, ..., R) are drawn from the same normal distribution.
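A minimal sketch of the pattern model in this snippet, assuming illustrative values for the number of categories R, the dimension, the prototypes P_i, and the common noise spread (none of these come from the book):

```python
# Sketch (not from the book): each pattern in category i is a fixed prototype
# P_i plus zero-mean Gaussian noise, with all noise vectors drawn from the
# same normal distribution.
import numpy as np

rng = np.random.default_rng(0)

R, d = 3, 2                                   # assumed number of categories and dimension
sigma = 0.5                                   # assumed common noise standard deviation
prototypes = rng.uniform(-2, 2, size=(R, d))  # the fixed, non-random vectors P_i (assumed)

def sample_pattern(i, n=1):
    """Draw n random patterns from category i: P_i plus Gaussian noise N_i."""
    noise = rng.normal(0.0, sigma, size=(n, d))
    return prototypes[i] + noise

print(sample_pattern(0, n=5))   # five noisy patterns from category 0
```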
Page 104
For a given set of weights, the first layer will transform a finite set X of pattern vectors into a finite set g^(1) of image-space vertices. Now looking at the second layer of TLUs, we can say that it transforms the vertices ...
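A minimal sketch of the layered-machine idea in this snippet, assuming random illustrative weights, a small set of patterns, and +/-1 TLU outputs (the specific weights and sizes are not from the book):

```python
# Sketch (assumed setup): the first layer of threshold logic units (TLUs) maps
# each pattern vector to a vertex of the image space (a vector of +/-1 outputs);
# a second-layer TLU then responds to that vertex.
import numpy as np

def tlu_bank(X, W, b):
    """Apply a bank of TLUs: +1 where the weighted sum is nonnegative, else -1."""
    return np.where(X @ W.T + b >= 0, 1, -1)

rng = np.random.default_rng(1)
d, h = 2, 3                                             # assumed pattern dimension, first-layer TLU count
W1, b1 = rng.normal(size=(h, d)), rng.normal(size=h)    # assumed first-layer weights
w2, b2 = rng.normal(size=h), 0.0                        # assumed second-layer TLU weights

X = rng.normal(size=(4, d))                  # a finite set of pattern vectors
vertices = tlu_bank(X, W1, b1)               # image-space vertices, e.g. [+1, -1, +1]
responses = np.where(vertices @ w2 + b2 >= 0, 1, -1)    # second-layer responses
print(vertices)
print(responses)
```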
Page 121
Thus each typical pattern for a given category might be thought of as a "mode" of the probability-density function for that category. We use the word mode here to denote the location of a local maximum in the probability-density ...
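One way to illustrate "typical patterns as modes" is a Gaussian-mixture density with one component per typical pattern, so each typical pattern sits at a local maximum of the density; the particular mixture, typical patterns, and spread below are assumptions for illustration, not the book's example:

```python
# Sketch (assumed setup): an equal-weight Gaussian mixture centred on a
# category's typical patterns makes each typical pattern a mode of the density.
import numpy as np

typical = np.array([[0.0, 0.0], [3.0, 3.0]])   # assumed typical patterns for one category
sigma = 0.7                                    # assumed common component spread

def density(x):
    """Mixture density for the category: one 2-D Gaussian per typical pattern."""
    sq = np.sum((x - typical) ** 2, axis=1)
    comps = np.exp(-sq / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    return comps.mean()

# The density is locally highest at the typical patterns themselves:
print(density(typical[0]), density(typical[1]), density(np.array([1.5, 1.5])))
```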
Contents
TRAINABLE PATTERN CLASSIFIERS 1
PARAMETRIC TRAINING METHODS 43
SOME NONPARAMETRIC TRAINING METHODS 65
Copyright
2 other sections not shown
Common terms and phrases
adjusted apply assume bank belonging to category called changes Chapter cluster committee components consider consists contains correction corresponding decision surfaces define denote density depends derivation described Development discriminant functions discussed distance distribution element equal error-correction estimates example exists expression FIGURE fixed given implemented important initial layered machine linear dichotomies linear discriminant functions linear machine linearly separable measurements negative networks normal Note optimum origin parameters partition pattern classifier pattern hyperplane pattern space pattern vector piecewise linear plane points positive presented probability problem proof properties proved PWL machine quadric reduced regions respect response rule sample mean selection separable shown side space Stanford step subsidiary discriminant Suppose theorem theory threshold training methods training procedure training sequence training subsets transformation values weight vectors zero
References to this work
A Probabilistic Theory of Pattern Recognition, Luc Devroye, László Györfi, Gabor Lugosi, Limited preview - 1997