Learning Machines: Foundations of Trainable Pattern-classifying Systems
From inside the book
Results 1-3 of 24
Page 119
It should be observed that if the two density functions overlap sufficiently, it is likely that this optimum decision surface will not perfectly separate all the members of the two training subsets. If we were willing to assume ...
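As a rough illustration of this point (a sketch of mine, not taken from the book): with two heavily overlapping normal densities of equal variance and equal prior probability, the optimum decision point is simply the midpoint of the two means, and yet it still misclassifies some of the training patterns drawn from each category.

# Sketch only: assumes equal-variance normal densities and equal priors, so the
# optimum (Bayes) decision point in one dimension is the midpoint of the means.
import numpy as np

rng = np.random.default_rng(0)
mean1, mean2, sigma = 0.0, 1.0, 1.0          # strongly overlapping class densities
boundary = (mean1 + mean2) / 2.0             # optimum decision point under these assumptions

x1 = rng.normal(mean1, sigma, 100)           # training subset for category 1
x2 = rng.normal(mean2, sigma, 100)           # training subset for category 2

# Count training patterns that land on the wrong side of the optimum surface.
errors = int(np.sum(x1 >= boundary) + np.sum(x2 < boundary))
print(f"{errors} of 200 training patterns misclassified by the optimum surface")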
Page 120
function that depends on the geometric arrangement of the patterns in the training subsets. Many of these nonparametric rules actually lead to the same discriminant functions that would be obtained by parametric training and the ...
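One concrete instance of such a correspondence (my illustration under assumed conditions, not a quotation of the book's derivation): the nonparametric rule "assign X to the category of the closest sample mean" yields linear discriminant functions gi(X) = X·Mi - |Mi|^2/2, which is the same linear machine obtained by parametric training if the densities are assumed normal with identical spherical covariance matrices and equal priors.

# Sketch: minimum-distance-to-sample-means rule expressed as linear discriminant
# functions.  The assumptions named in the paragraph above are mine.
import numpy as np

def train_sample_means(subsets):
    """Return one sample mean Mi per training subset (each an Ni x d array)."""
    return [np.mean(S, axis=0) for S in subsets]

def classify(x, means):
    """Choose the category whose discriminant gi(x) = x.Mi - |Mi|^2/2 is largest."""
    scores = [x @ m - 0.5 * (m @ m) for m in means]
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
subset1 = rng.normal([0.0, 0.0], 1.0, size=(50, 2))   # training subset, category 0
subset2 = rng.normal([3.0, 3.0], 1.0, size=(50, 2))   # training subset, category 1
means = train_sample_means([subset1, subset2])
print(classify(np.array([2.5, 2.8]), means))           # -> 1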
Page 121
if N is the total number of patterns in the training subsets. The value of k/N, however, should decrease toward zero with increasing N. The high storage requirements of the Fix and Hodges method render it impractical in most ...
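A brief sketch of a Fix and Hodges style nearest-neighbor rule may make both remarks concrete. The choice k ≈ √N below is illustrative only, picked because it grows with N while k/N decreases toward zero, as the excerpt requires; it is not prescribed by the text. Note also that every one of the N training patterns must be stored, which is the storage burden the excerpt mentions.

# Sketch of a k-nearest-neighbor decision rule; k = round(sqrt(N)) is an
# illustrative choice satisfying k/N -> 0 as N grows.
import numpy as np

def fix_hodges_classify(x, patterns, labels):
    """Majority vote among the k nearest of the N stored training patterns."""
    N = len(patterns)
    k = max(1, int(round(np.sqrt(N))))              # grows with N, but k/N -> 0
    dists = np.linalg.norm(patterns - x, axis=1)    # distance to every stored pattern
    nearest = labels[np.argsort(dists)[:k]]         # labels of the k closest patterns
    return int(np.bincount(nearest).argmax())

rng = np.random.default_rng(2)
patterns = np.vstack([rng.normal(0.0, 1.0, (100, 2)),   # all N patterns must be kept in storage
                      rng.normal(2.0, 1.0, (100, 2))])
labels = np.array([0] * 100 + [1] * 100)
print(fix_hodges_classify(np.array([1.8, 2.1]), patterns, labels))   # -> 1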
Common terms and phrases
adjusted apply assume bank belonging to category called changes Chapter classifier cluster committee components consider consists contains correction corresponding decision surfaces define denote density depends derivation described discriminant functions discussed distance distribution element equal error-correction estimates example exists expression FIGURE fixed gi(X) given illustrated implemented important initial known layered machine linear dichotomies linear machine linearly separable negative normal Note optimum origin parameters partition pattern classifier pattern hyperplane pattern space pattern vector piecewise linear plane points positive presented probability problem proof properties proved PWL machine quadric reduced regions respect response rule sample mean selected separable shown side space specific Stanford step Suppose theorem theory threshold training methods training patterns training procedure training sequence training subsets transformation values weight vectors zero