A Probabilistic Theory of Pattern Recognition
Springer Science & Business Media, November 27, 2013 - 638 pages

Pattern recognition presents one of the most significant challenges for scientists and engineers, and many different approaches have been proposed. The aim of this book is to provide a self-contained account of probabilistic analysis of these approaches. The book includes a discussion of distance measures, nonparametric methods based on kernels or nearest neighbors, Vapnik-Chervonenkis theory, epsilon entropy, parametric classification, error estimation, tree classifiers, and neural networks. Wherever possible, distribution-free properties and inequalities are derived. A substantial portion of the results or the analysis is new. Over 430 problems and exercises complement the material.
From inside the book
Results 1-5 of 85
Page 18
... Show that L* ≤ min(p, 1 − p), where p and 1 − p are the class probabilities. Show that equality holds if X and Y are independent. Exhibit a distribution where X is not independent of Y, but L* = min(p, 1 − p). - PROBLEM 2.4 ...
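The claims in this excerpt can be checked numerically for discrete distributions; the sketch below uses example distributions of my own choosing, not the book's solutions, and the formula L* = Σ_x min(P(X=x, Y=0), P(X=x, Y=1)) for the discrete Bayes error.

```python
# Numerical sketch (my own discrete examples, not the book's solutions).
# For a discrete pair (X, Y), the Bayes error is
#   L* = sum_x min(P(X=x, Y=0), P(X=x, Y=1)).
def bayes_error(joint):
    """joint maps x -> (P(X=x, Y=0), P(X=x, Y=1)); returns L*."""
    return sum(min(q0, q1) for q0, q1 in joint.values())

p = 0.3  # class probability P(Y = 1)

# X independent of Y: every cell factorizes as P(X=x) * P(Y=y),
# so L* = min(p, 1 - p) exactly.
indep = {0: (0.5 * (1 - p), 0.5 * p), 1: (0.5 * (1 - p), 0.5 * p)}

# X dependent on Y, yet L* still equals min(p, 1 - p) = 0.3:
# here P(X=0, Y=1) = 0.3 while P(X=0) * P(Y=1) = 0.7 * 0.3 = 0.21.
dep = {0: (0.4, 0.3), 1: (0.3, 0.0)}
```

Evaluating `bayes_error` on both tables gives 0.3 = min(p, 1 − p) in each case, even though only the first table factorizes.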
Page 19
... show that the Bayes error for classification based upon (T, B) can be made as close as desired to 1/2. (2) Let T and B be independent and exponentially distributed. Find a joint distribution of (T, B, E) such that the Bayes ...
Page 20
... show that if we are given a deterministic sequence of density functions f, f₁, f₂, f₃, ..., then lim_{n→∞} ∫ (fₙ(x) − f(x))² dx = 0 implies lim_{n→∞} ∫ |fₙ(x) − f(x)| dx = 0. (A function f is called a ...
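The phenomenon in this excerpt (L2 convergence of densities forcing L1 convergence) can be illustrated numerically; the sequence fₙ below is my own example, not taken from the book, and the computation is a sketch, not a proof.

```python
import math

# Numeric illustration (a sketch, not a proof): densities f_n on [0, 1]
# converging to f = 1 in L2 also converge in L1. The oscillating
# sequence below is my own example.
def f_n(x, n):
    # a density: integrates to 1 over [0, 1], nonnegative since 1/sqrt(n) <= 1
    return 1.0 + math.sin(2 * math.pi * n * x) / math.sqrt(n)

def lp_distance(n, p, grid=100000):
    # midpoint-rule approximation of (integral of |f_n - 1|^p over [0, 1])^(1/p)
    h = 1.0 / grid
    total = h * sum(abs(f_n((i + 0.5) * h, n) - 1.0) ** p for i in range(grid))
    return total ** (1.0 / p)

# Both distances shrink: L2 distance is 1/sqrt(2n), L1 distance is (2/pi)/sqrt(n).
for n in (1, 10, 100):
    print(n, lp_distance(n, 2), lp_distance(n, 1))
```

The closed forms 1/√(2n) and (2/π)/√n follow from integrating sin² and |sin| over whole periods, so both distances vanish together as n grows.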
Page 35
... Show that for all α ∈ [0, 1/2], there exists a distribution of (X, Y) such that L_NN = L* = α. (2) Show that for all α ∈ [0, 1/2], there exists a distribution of (X, Y) such that L_NN = α, L* = (1 − √(1 − 2α))/2. (3) Show that ...
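Candidate constructions for the first two parts can be checked with the standard formulas L* = E[min(η, 1 − η)] and L_NN = E[2η(1 − η)], where η(x) = P(Y = 1 | X = x); the specific distributions below are my own guesses at the intended examples, not the book's solutions.

```python
import math

# Sketch for parts (1) and (2), using L* = E[min(eta, 1 - eta)] and
# L_NN = E[2 * eta * (1 - eta)]. The distributions are my own guesses
# at the intended constructions, not the book's solutions.
def errors(cells):
    """cells: list of (P(X in cell), eta on that cell) -> (L*, L_NN)."""
    l_star = sum(w * min(eta, 1 - eta) for w, eta in cells)
    l_nn = sum(w * 2 * eta * (1 - eta) for w, eta in cells)
    return l_star, l_nn

alpha = 0.2

# (1) eta = 1/2 with probability 2*alpha, eta = 0 otherwise:
# both functionals evaluate to alpha, so L_NN = L* = alpha.
part1 = [(2 * alpha, 0.5), (1 - 2 * alpha, 0.0)]

# (2) eta constant at (1 - sqrt(1 - 2*alpha)) / 2: then
# 2 * eta * (1 - eta) = alpha, so L_NN = alpha while L* = eta,
# i.e. the Cover-Hart bound L_NN <= 2 L* (1 - L*) holds with equality.
eta2 = (1 - math.sqrt(1 - 2 * alpha)) / 2
part2 = [(1.0, eta2)]
```

For `part2`, note 2η(1 − η) = (1 − (1 − 2α))/2 = α, which is what makes this the equality case of the Cover-Hart inequality.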
Page 36
... show that ρ = √(p(1 − p)) e^{−Δ²/8}, where ρ is the Matushita error and Δ is the Mahalanobis distance. PROBLEM 3.12. For every Δ ∈ [0, ∞) and L* ∈ [0, 1/2] with L* ≤ 2/(4 + Δ²), find distributions μ₀ and μ₁ ...
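The stated identity can be verified numerically in the one-dimensional equal-variance Gaussian case; this is a sketch under that assumption, using the representation ρ = ∫ √(p f₀(x) (1 − p) f₁(x)) dx of the Matushita error, with Δ = |m₁ − m₀|/s.

```python
import math

# Numeric check (a sketch, one-dimensional equal-variance Gaussian case):
# for class densities N(m0, s^2) and N(m1, s^2) with class probabilities
# p and 1 - p, the Matushita error
#   rho = integral of sqrt(p * f0(x) * (1 - p) * f1(x)) dx
# equals sqrt(p * (1 - p)) * exp(-Delta^2 / 8), Delta = |m1 - m0| / s.
def normal_pdf(x, m, s):
    return math.exp(-((x - m) ** 2) / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

def matushita(p, m0, m1, s, lo=-50.0, hi=50.0, steps=200000):
    # midpoint-rule quadrature of the integral above
    h = (hi - lo) / steps
    return h * sum(
        math.sqrt(p * normal_pdf(lo + (i + 0.5) * h, m0, s)
                  * (1 - p) * normal_pdf(lo + (i + 0.5) * h, m1, s))
        for i in range(steps)
    )

p, m0, m1, s = 0.3, 0.0, 2.0, 1.0
delta = abs(m1 - m0) / s
closed_form = math.sqrt(p * (1 - p)) * math.exp(-delta ** 2 / 8)
print(matushita(p, m0, m1, s), closed_form)
```

The agreement follows because √(f₀ f₁) is itself proportional to a Gaussian density, with the factor e^{−Δ²/8} appearing when the two exponents are combined.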
Contents

Nearest Neighbor Rules | 60 |
Error Estimation | 120 |
The Regular Histogram Rule | 133 |
Kernel Rules | 153 |
Consistency of the k-Nearest Neighbor Rule | 168 |
Vapnik-Chervonenkis Theory | 187 |
Combinatorial Aspects of Vapnik-Chervonenkis Theory | 214 |
The Maximum Likelihood Principle | 249 |
Parametric Classification | 263 |
Generalized Linear Discrimination | 279 |
Complexity Regularization | 289 |
Condensed and Edited Nearest Neighbor Rules | 303 |
Tree Classifiers | 315 |
Data-Dependent Partitioning | 363 |
Splitting the Data | 387 |
The Resubstitution Estimate | 397 |
Deleted Estimates of the Error Probability | 407 |
Automatic Kernel Rules | 423 |
Automatic Nearest Neighbor Rules | 451 |
Hypercubes and Discrete Spaces | 461 |
Epsilon Entropy and Totally Bounded Sets | 479 |
Uniform Laws of Large Numbers | 489 |
Neural Networks | 507 |
Other Error Estimates | 549 |
Feature Extraction | 561 |
Appendix | 575 |
Notation | 591 |
Author Index | 619 |
Subject Index | 627 |
Other editions - View all
A Probabilistic Theory of Pattern Recognition. Luc Devroye, László Györfi, Gábor Lugosi. Limited preview - 1997
A Probabilistic Theory of Pattern Recognition. Luc Devroye, László Györfi, Gábor Lugosi. No preview available - 2014
A Probabilistic Theory of Pattern Recognition. Luc Devroye, László Györfi, Gábor Lugosi. No preview available - 2013