Learning Machines: Foundations of Trainable Pattern-classifying Systems
From inside the book
Results 1 - 3 of 9
Page 11
These theorems apply to a large class of discriminant functions and are therefore of fundamental importance. The concept of a layered machine is introduced in Chapter 6. Most of the pattern classifiers containing threshold elements that have ...
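Since the excerpt mentions classifiers built from threshold elements, here is a minimal sketch of a single threshold element (a linear discriminant followed by a threshold test). The weight vector, threshold, and example patterns are illustrative assumptions, not values from the book.

```python
import numpy as np

def threshold_element(x, w, theta):
    """Threshold element: respond +1 if the weighted sum exceeds theta, else -1."""
    return 1 if np.dot(w, x) > theta else -1

# Illustrative weights, threshold, and patterns (assumed for this sketch).
w = np.array([1.0, 1.0])
theta = 0.5
for x in [np.array([1.0, 0.0]), np.array([0.0, 0.0])]:
    print(x, "->", threshold_element(x, w, theta))
```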
Page 116
Since the piecewise linear discriminant functions are not Φ functions, the error-correction training theorems proved in Chapter 5 do not apply to PWL machines. The pattern capacity of PWL machines is also unknown. Even though well ...
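For context on this excerpt: a piecewise linear (PWL) machine forms each category's discriminant as the maximum of several subsidiary linear discriminant functions and assigns a pattern to the category with the largest value. The sketch below shows that decision rule only; the weights are arbitrary placeholders, not a trained machine.

```python
import numpy as np

def pwl_discriminant(x, subsidiary_weights):
    """One category's discriminant: max over its subsidiary linear functions.
    Each row of subsidiary_weights holds (w_1, ..., w_d, w_0) for one linear piece."""
    x_aug = np.append(x, 1.0)  # augment the pattern with 1 for the constant term
    return max(float(w @ x_aug) for w in subsidiary_weights)

def pwl_classify(x, machine):
    """Assign x to the category whose PWL discriminant is largest."""
    return int(np.argmax([pwl_discriminant(x, subs) for subs in machine]))

# Two categories, each with two subsidiary linear discriminants (illustrative values).
machine = [
    np.array([[ 1.0,  0.0, 0.0], [ 0.5,  0.5, -0.2]]),   # category 0
    np.array([[-1.0,  0.0, 0.0], [-0.5, -0.5,  0.2]]),   # category 1
]
print(pwl_classify(np.array([0.8, 0.1]), machine))        # -> 0
```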
Page 122
What is needed to apply the closest-mode method is a means of training a PWL machine such that the modes are identified and the appropriate discriminant functions are set up. This training process should be an iterative one, operating on ...
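As a rough illustration of the idea in this excerpt, the sketch below identifies modes for each category with a simple iterative clustering pass and then classifies a pattern by its closest mode. This is only a minimal stand-in for the kind of iterative mode-finding procedure the passage calls for; the clustering scheme, mode counts, and data are all assumptions, not the book's procedure.

```python
import numpy as np

def find_modes(patterns, n_modes, n_iter=20, seed=0):
    """Iteratively estimate cluster centers (modes) for one category's patterns
    with a simple k-means-style pass (an assumed stand-in, not the book's method)."""
    rng = np.random.default_rng(seed)
    modes = patterns[rng.choice(len(patterns), n_modes, replace=False)]
    for _ in range(n_iter):
        # Assign each pattern to its nearest mode, then recompute each mode
        # as the sample mean of the patterns assigned to it.
        nearest = np.argmin(
            np.linalg.norm(patterns[:, None, :] - modes[None, :, :], axis=2), axis=1)
        for k in range(n_modes):
            if np.any(nearest == k):
                modes[k] = patterns[nearest == k].mean(axis=0)
    return modes

def closest_mode_classify(x, modes_per_category):
    """Assign x to the category owning the mode closest to x."""
    dists = [min(np.linalg.norm(x - m) for m in modes) for modes in modes_per_category]
    return int(np.argmin(dists))

# Illustrative data: category 0 has modes near (0,0) and (5,5); category 1 near (0,5).
cat0 = np.vstack([np.random.randn(20, 2), np.random.randn(20, 2) + 5.0])
cat1 = np.random.randn(20, 2) + np.array([0.0, 5.0])
modes_per_category = [find_modes(cat0, 2), find_modes(cat1, 1)]
print(closest_mode_classify(np.array([4.5, 5.2]), modes_per_category))  # likely 0
```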
Contents
Preface | vii |
SOME NONPARAMETRIC TRAINING METHODS | 65 |
TRAINING THEOREMS | 79 |
Copyright | |
2 other sections not shown
Common terms and phrases
adjusted apply assume bank belonging to category called changes Chapter cluster committee components consider consists contains correction corresponding covariance decision surfaces define denote density depends derivation described Development discriminant functions discussed distance distribution element equal error-correction estimates example exists expression FIGURE fixed given implemented important initial layered machine linear dichotomies linear machine linearly separable matrix measurements negative networks normal Note optimum origin parameters partition pattern classifier pattern hyperplane pattern space pattern vector piecewise linear plane points positive presented probability problem proof properties proved PWL machine quadric reduced regions respect response rule sample mean selection separable shown side solution space Stanford step Suppose theorem theory threshold training methods training procedure training sequence training subsets transformation values weight vectors zero