Learning Machines: Foundations of Trainable Pattern-classifying Systems. McGraw-Hill, 1965 - 137 pages
From inside the book
Results 1 - 3 of 25
Page 101
... adjusted are those which have dot products closest to zero. (Ties are resolved arbitrarily.) These, in one sense, are the easiest to adjust. The adjustment is achieved by the familiar process of adding (or subtracting) the ...
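The page 101 snippet describes an error-correction rule in which only the units whose dot products with the pattern are closest to zero are retrained, by adding or subtracting the pattern vector. Below is a minimal sketch of one such step, assuming a committee of TLUs whose overall response is a majority vote and an adjustable count `n_to_fix` of units to correct; the function name and these surrounding details are illustrative assumptions, not taken from the book.

```python
import numpy as np

def committee_correction_step(W, x, target, n_to_fix=1):
    """One error-correction step for a committee of TLUs (sketch).

    W      : (n_units, d) array, one weight vector per TLU
    x      : (d,) augmented pattern vector
    target : +1 or -1, the desired committee response
    """
    dots = W @ x                          # dot product of each TLU with the pattern
    wrong = np.sign(dots) != target       # TLUs currently voting the wrong way
    if not wrong.any():
        return W                          # nothing to correct
    # Among the wrongly voting TLUs, pick those whose dot products are
    # closest to zero -- "the easiest to adjust"; argsort breaks ties arbitrarily.
    wrong_idx = np.flatnonzero(wrong)
    order = wrong_idx[np.argsort(np.abs(dots[wrong_idx]))]
    for j in order[:n_to_fix]:
        W[j] = W[j] + target * x          # add (or subtract) the pattern vector
    return W
```

For instance, with `W = np.zeros((3, 4))` and `x = np.array([1., 0., 0., 1.])`, calling `committee_correction_step(W, x, target=+1)` nudges exactly one row of `W` toward `x` and leaves the others untouched.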
Page 103
... adjusted, and W1 and W would wander around perpetually in a futile search for stable locations, which do not exist so long as W2 cannot cooperate by leaving its initial region.* This same phenomenon accounts for instances in which ...
Page 123
... adjustments to the (d + 1)st components. Suppose that the jth weight vector in this bank is the closest one to X + 1. Then, only this closest weight vector is adjusted and all other weight vectors (including all those in the ...
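The page 123 snippet describes adjusting only the weight vector in a bank that lies closest to the presented pattern, while every other weight vector (in this bank or any other) is left unchanged. The sketch below assumes Euclidean distance as the closeness measure and a simple move-toward-the-pattern update of size `step`; both are assumptions for illustration, since the snippet does not show the actual increment the book uses.

```python
import numpy as np

def adjust_closest_in_bank(bank, x, step=0.1):
    """Adjust only the weight vector in `bank` closest to pattern x (sketch).

    bank : (n_vectors, d) array, the weight vectors of one bank
    x    : (d,) pattern vector
    """
    distances = np.linalg.norm(bank - x, axis=1)   # Euclidean distance to each weight vector (assumed metric)
    j = int(np.argmin(distances))                  # index of the closest weight vector
    bank[j] = bank[j] + step * (x - bank[j])       # move only the winner toward the pattern
    return j, bank                                 # every other weight vector is untouched
```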
Contents
TRAINABLE PATTERN CLASSIFIERS | 1
PARAMETRIC TRAINING METHODS | 43
SOME NONPARAMETRIC TRAINING METHODS | 65
Copyright
4 other sections not shown
Common terms and phrases
assume belonging to category cluster committee machine committee TLUs components correction increment covariance matrix decision surfaces denote diagonal matrix dot products error-correction procedure Euclidean distance example Fix and Hodges function g(X) g₁(X) gᵢ(X) given Hodges method hypersphere image-space implemented initial weight vectors ith bank layer of TLUs layered machine linear dichotomies linear discriminant functions linearly separable loss function mean vector minimum-distance classifier mode-seeking networks nonparametric number of patterns p₁ parameters parametric training partition pattern hyperplane pattern points pattern space pattern vector pattern-classifying patterns belonging perceptron piecewise linear point sets positive probability distributions prototype pattern PWL machine quadratic form quadric function rule sample covariance matrix shown in Fig solution weight vectors subsets X1 subsidiary discriminant functions Suppose TLU response training patterns training sequence training set training subsets transformation two-layer machine values W₁ wa+1 weight point weight space weight-vector sequence X1 and X2 zero
References to this work
A Probabilistic Theory of Pattern Recognition. Luc Devroye, László Györfi, Gabor Lugosi. Limited preview - 1997