Preface

In writing this third edition of a classic book, I have been guided by the same underlying philosophy of the first edition of the book: Write an up-to-date treatment of neural networks in a comprehensive, thorough, and readable manner. The new edition has been retitled Neural Networks and Learning Machines, in order to reflect two realities:
1. The perceptron, the multilayer perceptron, self-organizing maps, and neurodynamics, to name a few topics, have always been considered integral parts of neural networks, rooted in ideas inspired by the human brain.
2. Kernel methods, exemplified by support vector machines and kernel principal-components analysis, are rooted in statistical learning theory.
Although, indeed, they share many fundamental concepts and applications, there are some subtle differences between the operations of neural networks and learning machines. The underlying subject matter is therefore much richer when they are studied together, under one umbrella, particularly so when ideas drawn from neural networks and machine learning are hybridized to perform improved learning tasks beyond the capability of either one operating on its own, and ideas inspired by the human brain lead to new perspectives wherever they are of particular importance.
About the Author

Simon Haykin received his Ph.D. from the University of Birmingham, UK, in 1953. He is currently a professor in the Department of Electrical and Computer Engineering at McMaster University, Canada, and director of its Communications Research Laboratory. A renowned scholar in the field of electrical and electronic engineering, he has received the IEEE McNaughton Gold Medal. He is a Fellow of the Royal Society of Canada and an IEEE Fellow, has made prolific contributions to neural networks, communications, adaptive filters, and related fields, and is the author of several standard textbooks.
Table of Contents

Preface
Acknowledgements
Abbreviations and Symbols
Glossary
Introduction
1 What Is a Neural Network?
2 The Human Brain
3 Models of a Neuron
4 Neural Networks Viewed As Directed Graphs
5 Feedback
6 Network Architectures
7 Knowledge Representation
8 Learning Processes
9 Learning Tasks
10 Concluding Remarks
Notes and References
Chapter 1 Rosenblatt's Perceptron
1.1 Introduction
1.2 Perceptron
1.3 The Perceptron Convergence Theorem
1.4 Relation Between the Perceptron and Bayes Classifier for a Gaussian Environment
1.5 Computer Experiment: Pattern Classification
1.6 The Batch Perceptron Algorithm
1.7 Summary and Discussion
Notes and References
Problems
Chapter 2 Model Building through Regression
2.1 Introduction
2.2 Linear Regression Model: Preliminary Considerations
2.3 Maximum a Posteriori Estimation of the Parameter Vector
2.4 Relationship Between Regularized Least-Squares Estimation and MAP Estimation
2.5 Computer Experiment: Pattern Classification
2.6 The Minimum-Description-Length Principle
2.7 Finite Sample-Size Considerations
2.8 The Instrumental-Variables Method
2.9 Summary and Discussion
Notes and References
Problems
Chapter 3 The Least-Mean-Square Algorithm
3.1 Introduction
3.2 Filtering Structure of the LMS Algorithm
3.3 Unconstrained Optimization: A Review
3.4 The Wiener Filter
3.5 The Least-Mean-Square Algorithm
3.6 Markov Model Portraying the Deviation of the LMS Algorithm from the Wiener Filter
3.7 The Langevin Equation: Characterization of Brownian Motion
3.8 Kushner's Direct-Averaging Method
3.9 Statistical LMS Learning Theory for Small Learning-Rate Parameter