Learning Machines: Foundations of Trainable Pattern-classifying Systems
McGraw-Hill, 1965 - 137 pages
From inside the book
Results 1-3 of 28
Page 50
... normal or Gaussian probability-density function is important because of its computational simplicity and because it represents a realistic model of many pattern-classification situations. Furthermore, normal distributions ...
Page 54
... normal distribution which describes the joint probability density of d components. Patterns selected according to this joint probability distribution will be called multivariate normal patterns or, more simply, normal patterns ...
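For reference, the d-variate normal density this snippet refers to has the standard textbook form, with mean vector M and covariance matrix Σ (terms that also appear in the word list below); the formula is the standard one, not quoted from the preview:

p(\mathbf{X}) = (2\pi)^{-d/2}\,|\Sigma|^{-1/2}\,\exp\!\left[-\tfrac{1}{2}(\mathbf{X}-\mathbf{M})^{\mathsf{T}}\,\Sigma^{-1}\,(\mathbf{X}-\mathbf{M})\right]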
Page 55
... normal patterns We are now ready to derive the optimum classifier for normal patterns. We shall temporarily assume that for each category i, where i = 1, ..., R, we know the a priori probability p(i) and the particular d-variate ...
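The optimum classifier sketched in this snippet assigns a pattern X to the category with the largest discriminant value gᵢ(X) = ln p(i) + ln p(X | i). The following is a minimal Python sketch under those assumptions; the names gaussian_discriminant and classify, and the example parameters, are illustrative only and do not appear in the book:

import numpy as np

def gaussian_discriminant(x, mean, cov, prior):
    # g_i(X) = ln p(i) + ln p(X | i) for a d-variate normal category.
    d = len(mean)
    diff = x - mean
    log_density = (-0.5 * diff @ np.linalg.solve(cov, diff)
                   - 0.5 * np.log(np.linalg.det(cov))
                   - 0.5 * d * np.log(2.0 * np.pi))
    return np.log(prior) + log_density

def classify(x, means, covs, priors):
    # Decide in favor of the category i = 1, ..., R maximizing g_i(X).
    scores = [gaussian_discriminant(x, m, c, p)
              for m, c, p in zip(means, covs, priors)]
    return int(np.argmax(scores))

# Illustrative example: R = 2 categories in d = 2 dimensions.
means = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
covs = [np.eye(2), 2.0 * np.eye(2)]
priors = [0.5, 0.5]
print(classify(np.array([2.5, 2.8]), means, covs, priors))  # prints 1

When all categories share the same covariance matrix, the quadratic terms cancel and the decision surfaces reduce to hyperplanes, the linear discriminant case emphasized in the common terms listed below.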
Contents
TRAINABLE PATTERN CLASSIFIERS | 1
PARAMETRIC TRAINING METHODS | 43
SOME NONPARAMETRIC TRAINING METHODS | 65
Copyright
4 other sections not shown
Other editions
Common terms and phrases
assume, belonging to category, cluster, committee machine, committee TLUs, components, correction increment, covariance matrix, decision surfaces, denote, diagonal matrix, dot products, error-correction procedure, Euclidean distance, example, Fix and Hodges, function, g(X), g₁(X), gᵢ(X), given, Hodges method, hypersphere, image-space, implemented, initial weight vectors, ith bank, layer of TLUs, layered machine, linear dichotomies, linear discriminant functions, linearly separable, loss function, mean vector, minimum-distance classifier, mode-seeking, networks, nonparametric, number of patterns, p₁, parameters, parametric training, partition, pattern hyperplane, pattern points, pattern space, pattern vector, pattern-classifying, patterns belonging, perceptron, piecewise linear, point sets, positive, probability distributions, prototype pattern, PWL machine, quadratic form, quadric function, rule, sample covariance matrix, shown in Fig., solution weight vectors, subsets X₁, subsidiary discriminant functions, suppose, TLU response, training patterns, training sequence, training set, training subsets, transformation, two-layer machine, values, W₁, wd+1, weight point, weight space, weight-vector sequence, X₁ and X₂, zero
References to this work
A Probabilistic Theory of Pattern Recognition, Luc Devroye, László Györfi, Gábor Lugosi, 1997 (limited preview)