Learning Machines: Foundations of Trainable Pattern-classifying Systems. McGraw-Hill, 1965 - 137 pages
From inside the book
Results 1-3 of 17
Page 53
... distribution. The notation used in Eq. (3.20) to describe the normal distribution can be made more compact if we define and use the following matrices. Let the pattern vector X be a column vector (a 2 × 1 matrix) with compo...
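The snippet breaks off before the compact form appears. For reference, a sketch of the compact matrix expression such a definition leads to, in conventional notation (mean vector M, covariance matrix Σ, with d = 2 for the bivariate case discussed on this page; the symbols here follow standard usage, not necessarily the book's exact typography):

$$
p(X) \;=\; \frac{1}{(2\pi)^{d/2}\,|\Sigma|^{1/2}}\,
\exp\!\Big[-\tfrac{1}{2}\,(X - M)^{T}\,\Sigma^{-1}\,(X - M)\Big]
$$

Here X is the d × 1 pattern vector and |Σ| is the determinant of the covariance matrix.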
Page 54
... distribution which describes the joint probability density of d components. Patterns selected according to this joint probability distribution will be called multivariate normal patterns or, more simply, normal patterns. The ...
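The passage defines "normal patterns" as pattern vectors drawn from a multivariate normal distribution. A minimal sketch of generating such patterns with NumPy; the mean vector M and covariance matrix Sigma below are illustrative values, not taken from the book:

```python
import numpy as np

# Illustrative bivariate parameters (not from the book): mean vector M
# and covariance matrix Sigma of the multivariate normal distribution.
M = np.array([1.0, 2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

rng = np.random.default_rng(0)
# Each row is one "normal pattern": a pattern vector X drawn from N(M, Sigma).
patterns = rng.multivariate_normal(M, Sigma, size=5)
print(patterns)
```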
Page 123
... distribution of which the points are samples. It is true that if the probability distribution has only one mode (unimodal), then the center of gravity of a set of points is often a good estimate for this mode. In multimodal ...
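A short illustration of the point the passage makes (illustrative NumPy code, not from the book): for a unimodal sample the center of gravity (the sample mean) falls near the mode, while for a bimodal mixture it falls between the modes, where the density is low.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unimodal: samples around a single mode at 0; the center of gravity
# (sample mean) is a good estimate of that mode.
unimodal = rng.normal(loc=0.0, scale=1.0, size=1000)
print(unimodal.mean())   # close to 0, the mode

# Bimodal: a 50/50 mixture with modes at -5 and +5; the center of
# gravity lands near 0, estimating neither mode.
bimodal = np.concatenate([rng.normal(-5.0, 1.0, size=500),
                          rng.normal(+5.0, 1.0, size=500)])
print(bimodal.mean())    # near 0, far from either mode
```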
Contents
Preface | vii |
PARAMETRIC TRAINING METHODS | 43 |
SOME NONPARAMETRIC TRAINING METHODS | 65 |
Copyright | |
6 other sections not shown
Common terms and phrases
assume belonging to category Chapter cluster committee machine committee TLUs correction increment covariance matrix decision surfaces denote diagonal matrix discussed dot products error-correction procedure Euclidean distance example Fix and Hodges function g(X) g₁(X) given Hodges method hypersphere image-space implemented initial weight vectors ith bank layer of TLUs layered machine linear dichotomies linear discriminant functions linearly separable loss function mean vector minimum-distance classifier mode-seeking networks nonparametric number of patterns p₁ parameters parametric training partition pattern hyperplane pattern points pattern space pattern vector pattern-classifying patterns belonging perceptron piecewise linear plane point sets positive probability distributions prototype pattern PWL machine quadratic form quadric function rule sample covariance matrix shown in Fig. solution weight vectors Stanford subsets X₁ subsidiary discriminant functions Suppose TLU response training patterns training sequence training set training subsets transformation two-layer machine values W₁ wd+1 weight point weight space weight-vector sequence X₁ and X₂ zero
References to this work
A Probabilistic Theory of Pattern Recognition. Luc Devroye, László Györfi, Gábor Lugosi. Limited preview - 1997