Learning Machines: Foundations of Trainable Pattern-classifying Systems. McGraw-Hill, 1965 - 137 pages
From inside the book
Results 1 - 3 of 13
Page 49
... depends in a reasonable way on the probabilities involved. Note, for example, that the values of the a priori probabilities p(1) and 1 - p(1) affect only the value of w_{d+1}. As category 1 becomes less probable a priori ...
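The snippet is cut off, but the result it points at is standard: for two normal pattern categories with a common covariance matrix Σ and mean vectors M_1, M_2 (the book's notation), the optimum discriminant is linear, and the a priori probability p(1) enters only the constant (threshold) weight. A reconstructed sketch of that discriminant, not a quotation from the book:

    g(\mathbf{X}) \;=\; (\mathbf{M}_1 - \mathbf{M}_2)^{\mathsf T}\,\Sigma^{-1}\,\mathbf{X}
      \;+\; \tfrac{1}{2}\!\left(\mathbf{M}_2^{\mathsf T}\Sigma^{-1}\mathbf{M}_2 - \mathbf{M}_1^{\mathsf T}\Sigma^{-1}\mathbf{M}_1\right)
      \;+\; \log \frac{p(1)}{1 - p(1)}

The first term supplies the weights w_1, ..., w_d; everything after it is the single threshold weight w_{d+1}. Lowering p(1) lowers only w_{d+1}, which shrinks the region of pattern space assigned to category 1 - presumably where the truncated sentence was heading.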
Page 57
... depend on the values of the parameters of the individual probability distributions; rather, it depends only on the form of the distributions. Even if the parameter values of the distributions, the Σ_i and M_i, are not presently ...
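The point of this passage is that parametric training fixes the form of the distributions (here multivariate normal) and only the parameter values, the mean vectors M_i and covariance matrices Σ_i, need to be estimated from the training patterns. A minimal Python sketch of that estimate-then-discriminate step; every name in it is invented for illustration, since the book itself contains no code:

    import numpy as np

    def fit_normal_discriminant(patterns, prior):
        """Estimate the mean vector M and sample covariance matrix Sigma for one
        category from its training patterns (one pattern per row), and return a
        discriminant function g (a quadric function of the pattern)."""
        M = patterns.mean(axis=0)
        Sigma = np.cov(patterns, rowvar=False)        # sample covariance matrix
        Sigma_inv = np.linalg.inv(Sigma)
        log_det = np.linalg.slogdet(Sigma)[1]

        def g(x):
            # log p(x | category) + log(prior), dropping terms common to all categories
            d = x - M
            return -0.5 * d @ Sigma_inv @ d - 0.5 * log_det + np.log(prior)

        return g

    # Usage: classify a pattern X by picking the category whose g_i(X) is largest.
    # X1 and X2 are stand-in arrays of training patterns for categories 1 and 2.
    rng = np.random.default_rng(0)
    X1 = rng.normal(loc=0.0, size=(50, 2))
    X2 = rng.normal(loc=2.0, size=(50, 2))
    g1 = fit_normal_discriminant(X1, prior=0.5)
    g2 = fit_normal_discriminant(X2, prior=0.5)
    x = np.array([1.8, 2.1])
    category = 1 if g1(x) > g2(x) else 2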
Page 104
... depends on the values of the weights in the first layer. For a given set of weights, the first layer will transform a finite set X of pattern vectors into a finite set g(1) of image-space vertices. Now looking at the second ...
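The passage describes a layered machine: a first layer of TLUs maps each pattern vector to a vertex of the image space (a string of ±1 responses), and a second-layer TLU then classifies those vertices. A small Python sketch along those lines; the weight values are arbitrary placeholders and the function names are not the book's:

    import numpy as np

    def tlu(weights, x):
        """Threshold logic unit on an augmented pattern: response is +1 or -1."""
        x_aug = np.append(x, 1.0)              # augment with a constant component
        return 1.0 if weights @ x_aug > 0 else -1.0

    def first_layer_image(first_layer_weights, x):
        """Map a pattern vector to an image-space vertex: one +/-1 component
        per first-layer TLU."""
        return np.array([tlu(w, x) for w in first_layer_weights])

    def two_layer_response(first_layer_weights, second_layer_weights, x):
        """The second-layer TLU classifies the vertex produced by the first layer."""
        vertex = first_layer_image(first_layer_weights, x)
        return tlu(second_layer_weights, vertex)

    # Usage: three first-layer TLUs send each 2-dimensional pattern to a vertex of
    # the cube {-1, +1}^3; one second-layer TLU then dichotomizes those vertices.
    W1 = [np.array([1.0, -1.0, 0.5]),
          np.array([0.0, 1.0, -0.3]),
          np.array([-1.0, 1.0, 0.0])]
    W2 = np.array([1.0, 1.0, -1.0, 0.5])
    print(two_layer_response(W1, W2, np.array([0.2, 0.7])))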
Contents
TRAINABLE PATTERN CLASSIFIERS  (page 1)
PARAMETRIC TRAINING METHODS  (page 43)
SOME NONPARAMETRIC TRAINING METHODS  (page 65)
Copyright
4 other sections not shown
Other editions - View all
Common terms and phrases
assume augmented pattern belonging to category Chapter cluster committee machine committee TLUs components correction increment covariance matrix d-dimensional decision surfaces denote diagonal matrix discussed dot products error-correction procedure Euclidean distance example Fix and Hodges gi(X) given Hodges method hypersphere image-space implemented initial weight vectors ith bank layer of TLUs layered machine linear dichotomies linear discriminant functions linearly separable loss function mean vector minimum-distance classifier mode-seeking networks nonparametric number of patterns p₁ parameters parametric training partition pattern hyperplane pattern points pattern space pattern vector pattern-classifying patterns belonging perceptron piecewise linear plane point sets positive probability distributions prototype pattern PWL machine quadratic form quadric function rule sample covariance matrix shown in Fig solution weight vectors Stanford subsets X1 subsidiary discriminant functions Suppose TLU response training patterns training sequence training set training subsets transformation two-layer machine values W₁ weight point weight space weight-vector sequence X1 and X2 zero
References to this book
A Probabilistic Theory of Pattern Recognition, by Luc Devroye, László Györfi, Gabor Lugosi. Limited preview - 1997