Learning Machines: Foundations of Trainable Pattern-classifying Systems. Nils J. Nilsson. McGraw-Hill, 1965. 137 pages.
From inside the book
Results 1-3 of 9
Page 83
... initial weight vector Ŵ₁. Since for each Ŷⱼ in Sŷ and Ŵⱼ in Sŵ, Ŷⱼ · Ŵⱼ ≤ 0, we have from Eq. (5.8)

Ŵₖ₊₁ = Ŵ₁ + Ŷ₁ + Ŷ₂ + ⋯ + Ŷₖ          (5.9)

We shall prove the theorem for the case Ŵ₁ = 0, although essentially the same ...
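Eq. (5.9) is Eq. (5.8) telescoped. Assuming Eq. (5.8) is the fixed-increment correction rule Ŵₖ₊₁ = Ŵₖ + Ŷₖ applied along the reduced sequence (the equation itself falls outside this preview), the step can be written out as:

    % Assumed form of Eq. (5.8): each correction adds the current pattern.
    \hat{W}_{k+1} = \hat{W}_k + \hat{Y}_k
                  = (\hat{W}_{k-1} + \hat{Y}_{k-1}) + \hat{Y}_k
                  = \cdots
                  = \hat{W}_1 + \hat{Y}_1 + \hat{Y}_2 + \cdots + \hat{Y}_k \tag{5.9}

With Ŵ₁ = 0 this leaves Ŵₖ₊₁ as a bare sum of the retained training patterns, which is the quantity the rest of the convergence argument bounds.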
Page 88
... initial weight vectors; Yₖ belongs to one of the training subsets, say Yᵢ. Then, either
(a) Wᵢ(k) · Yₖ > Wⱼ(k) · Yₖ for j = 1, ..., R, j ≠ i, or
(b) there exists some l, l = 1, ..., R, l ≠ i, for which Wₗ(k) · Yₖ ≥ Wᵢ(k) · Yₖ ...
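Case (b) is the error event that triggers a correction in the R-category error-correction procedure. A minimal runnable sketch of that procedure follows; the function name, epoch cap, and zero initialization are illustrative assumptions, not the book's own listing:

    import numpy as np

    def train_linear_machine(patterns, labels, R, epochs=100):
        # Sketch of R-category fixed-increment error correction: on each
        # mistake, reinforce the correct category's weight vector with the
        # pattern and penalize the offending rival's.
        W = np.zeros((R, patterns.shape[1]))    # one weight vector per category
        for _ in range(epochs):
            mistakes = 0
            for Y, i in zip(patterns, labels):
                scores = W @ Y
                # Mask the true category i and find the strongest rival l != i.
                rivals = np.where(np.arange(R) == i, -np.inf, scores)
                l = int(np.argmax(rivals))
                if scores[l] >= scores[i]:      # case (b): W_i fails to dominate
                    W[i] += Y                   # reinforce the correct category
                    W[l] -= Y                   # penalize the offending rival
                    mistakes += 1
            if mistakes == 0:                   # case (a) held for every pattern
                break
        return W

When the training subsets are linearly separable in the multicategory sense, the loop stops correcting after finitely many steps; that termination is the kind of convergence claim these pages develop.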
Page 91
... initial weight vector W₁ we may remove from the training sequence those patterns Y′ for which Wₖ · Y′ > 0. The reduced training sequence Sŷ then creates a reduced weight-vector sequence Sŵ such that

Ŷₖ · Ŵₖ ≤ 0          (5.37)

for all Ŷₖ in Sŷ and ...
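The reduction just described can be stated operationally: replay training, skip every pattern the current weight vector already gets right, and record only the correcting steps. A small sketch under assumed conventions (fixed-increment rule Ŵₖ₊₁ = Ŵₖ + Ŷₖ, patterns sign-normalized so that W · Y > 0 means a correct response; all names are illustrative):

    import numpy as np

    def reduce_training_sequence(training_sequence, W1):
        # Build the reduced sequences S_Y (patterns) and S_W (weight vectors).
        # Patterns with W . Y' > 0 are removed, so every retained pair
        # satisfies Y_k . W_k <= 0 by construction, as in Eq. (5.37).
        W = np.asarray(W1, dtype=float)
        S_Y, S_W = [], []
        for Y in training_sequence:
            Y = np.asarray(Y, dtype=float)
            if W @ Y > 0:        # already classified correctly: removed
                continue
            S_Y.append(Y)        # here W . Y <= 0 holds
            S_W.append(W.copy())
            W = W + Y            # assumed fixed-increment correction (Eq. 5.8)
        return S_Y, S_W, W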
Contents
TRAINABLE PATTERN CLASSIFIERS ..... 1
SOME NONPARAMETRIC TRAINING METHODS ..... 65
LAYERED MACHINES ..... 95
Copyright
1 other section not shown
Common terms and phrases
assume, belonging to category, Chapter, cluster, committee machine, committee TLUs, components, correction increment, covariance matrix, decision surfaces, denote, diagonal matrix, dot products, error-correction procedure, Euclidean distance, example, Fix and Hodges, function, g(X), g₁(X), gᵢ(X), given, Hodges method, hypersphere, image-space, implemented, initial weight vectors, ith bank, layer of TLUs, layered machine, linear dichotomies, linear discriminant functions, linearly separable, loss function, mean vector, minimum-distance classifier, mode-seeking, networks, nonparametric, number of patterns, p₁, parameters, parametric training, partition, pattern hyperplane, pattern points, pattern space, pattern vector, pattern-classifying, patterns belonging, perceptron, piecewise linear, plane, point sets, positive, probability distributions, prototype pattern, PWL machine, quadratic form, quadric function, rule, sample covariance matrix, shown in Fig., solution weight vector, Stanford, subsets X₁, subsidiary discriminant functions, Suppose, training patterns, training sequence, training set, training subsets, transformation, two-layer machine, values, W₁, w_{d+1}, weight point, weight space, weight-vector sequence, X₁ and X₂, zero
References to this book
A Probabilistic Theory of Pattern Recognition. Luc Devroye, László Györfi, Gabor Lugosi, 1997.