Learning Machines: Foundations of Trainable Pattern-classifying Systems
From inside the book
Results 1 - 3 of 15
Page 20
... if a hyperplane exists which has each member of X_1 on one side and each member of X_2 on the other side. Because the decision regions of a linear machine are convex ...
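This snippet states the condition for linear separability of two pattern sets, X_1 and X_2, and notes that a linear machine's decision regions are convex. As a minimal Python sketch (not from the book; the weights and pattern values are illustrative assumptions), a linear machine assigns a pattern to the category whose linear discriminant is largest:

import numpy as np

# Minimal sketch (illustrative, not the book's program): a linear machine that
# assigns a pattern to the category whose discriminant g_i(x) = w_i . x is largest.
# Patterns are augmented with a trailing 1 so the threshold weight is part of w_i.

def augment(x):
    """Append 1 to a pattern vector so the threshold term is absorbed into the weights."""
    return np.append(np.asarray(x, dtype=float), 1.0)

def classify(weights, x):
    """Return the index of the category with the largest discriminant value."""
    y = augment(x)
    scores = [np.dot(w, y) for w in weights]
    return int(np.argmax(scores))

# Two hypothetical categories in the plane; the boundary between any pair of linear
# discriminants is a hyperplane, so each decision region is an intersection of
# half-spaces and hence convex.
weights = [np.array([1.0, 0.0, 0.0]),    # category 1: favours large first coordinate
           np.array([-1.0, 0.0, 0.0])]   # category 2: favours small first coordinate
print(classify(weights, [2.0, 1.0]))     # -> 0
print(classify(weights, [-3.0, 1.0]))    # -> 1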
Page 69
That is, W is either on the negative side of, or on, the pattern hyperplane corresponding to Y. This error can be rectified by moving W to the positive side of the pattern hyperplane. The most direct path to the other side is along a line normal to ...
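The correction described here moves the weight point W along the normal to the pattern hyperplane W . Y = 0, which is the direction of Y itself. A minimal sketch of such a fixed-increment correction step, assuming an illustrative increment c and augmented, sign-adjusted pattern vectors (not the book's listing):

import numpy as np

# Sketch of the fixed-increment error-correction idea described above.
# If the weight point W is on the negative side of (or on) the pattern
# hyperplane W . Y = 0, move W along Y, the normal to that hyperplane.

def fixed_increment_step(W, Y, c=1.0):
    """Return the corrected weight vector after presenting augmented pattern Y."""
    W = np.asarray(W, dtype=float)
    Y = np.asarray(Y, dtype=float)
    if np.dot(W, Y) <= 0.0:       # W on the wrong side of (or on) the pattern hyperplane
        W = W + c * Y             # most direct path: along the normal to the hyperplane
    return W

W = np.array([0.0, 0.0, 0.0])
Y = np.array([1.0, 2.0, 1.0])     # augmented pattern (sign-adjusted so W . Y > 0 is desired)
W = fixed_increment_step(W, Y)
print(W, np.dot(W, Y))            # W now lies on the positive side of the hyperplane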
Page 71
For λ = 0, the weight point is not moved at all; for λ = 1, the weight point is moved to the pattern hyperplane; and for λ = 2, the weight point is reflected across the pattern hyperplane to a point an equal distance on the other side. The complete ...
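A hedged sketch of this fractional correction rule, assuming the weight point is moved a fraction λ of its distance to the pattern hyperplane W . Y = 0 along the normal direction Y (the function and variable names are illustrative, not the book's):

import numpy as np

# Sketch of a fractional correction step: the move is lam times the component
# of W's offset from the pattern hyperplane W . Y = 0, taken along Y.

def fractional_correction_step(W, Y, lam):
    """Move W toward/through the pattern hyperplane by a fraction lam of its distance."""
    W = np.asarray(W, dtype=float)
    Y = np.asarray(Y, dtype=float)
    if np.dot(W, Y) <= 0.0:                       # pattern misclassified (or on the boundary)
        W = W + lam * (-np.dot(W, Y) / np.dot(Y, Y)) * Y
    return W

W = np.array([-2.0, 1.0, 0.0])
Y = np.array([1.0, 0.0, 1.0])
print(fractional_correction_step(W, Y, 0.0))   # lam = 0: weight point not moved
print(fractional_correction_step(W, Y, 1.0))   # lam = 1: lands exactly on the hyperplane
print(fractional_correction_step(W, Y, 2.0))   # lam = 2: reflected an equal distance across

With λ = 1 the corrected weight vector satisfies W . Y = 0 exactly, and with λ = 2 it ends up the same distance from the hyperplane but on the positive side, matching the three cases quoted above.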
Contents
Preface | vii |
| 1 |
SOME NONPARAMETRIC TRAINING METHODS | 65 |
TRAINING THEOREMS | 79 |
Copyright | |
2 other sections not shown
Common terms and phrases
adjusted apply assume bank belonging to category called changes Chapter cluster committee components consider consists contains correction corresponding covariance decision surfaces define denote density depends derivation described Development discriminant functions discussed distance distribution element equal error-correction estimates example exists expression FIGURE fixed given implemented important initial layered machine linear dichotomies linear machine linearly separable matrix measurements negative networks normal Note optimum origin parameters partition pattern classifier pattern hyperplane pattern space pattern vector piecewise linear plane points positive presented probability problem proof properties proved PWL machine quadric reduced regions respect response rule sample mean selection separable shown side solution space Stanford step Suppose theorem theory threshold training methods training procedure training sequence training subsets transformation values weight vectors zero