Contents

Preface
Notation
1 The Learning Methodology
  1.1 Supervised Learning
  1.2 Learning and Generalisation
  1.3 Improving Generalisation
  1.4 Attractions and Drawbacks of Learning
  1.5 Support Vector Machines for Learning
  1.6 Exercises
  1.7 Further Reading and Advanced Topics
2 Linear Learning Machines
  2.1 Linear Classification
    2.1.1 Rosenblatt's Perceptron
    2.1.2 Other Linear Classifiers
    2.1.3 Multi-class Discrimination
  2.2 Linear Regression
    2.2.1 Least Squares
    2.2.2 Ridge Regression
  2.3 Dual Representation of Linear Machines
  2.4 Exercises
  2.5 Further Reading and Advanced Topics
3 Kernel-Induced Feature Spaces
  3.1 Learning in Feature Space
  3.2 The Implicit Mapping into Feature Space
  3.3 Making Kernels
    3.3.1 Characterisation of Kernels
    3.3.2 Making Kernels from Kernels
    3.3.3 Making Kernels from Features
  3.4 Working in Feature Space
  3.5 Kernels and Gaussian Processes
  3.6 Exercises
  3.7 Further Reading and Advanced Topics
4 Generalisation Theory
  4.1 Probably Approximately Correct Learning
  4.2 Vapnik Chervonenkis (VC) Theory
  4.3 Margin-Based Bounds on Generalisation
    4.3.1 Maximal Margin Bounds
    4.3.2 Margin Percentile Bounds
    4.3.3 Soft Margin Bounds
  4.4 Other Bounds on Generalisation and Luckiness
  4.5 Generalisation for Regression
  4.6 Bayesian Analysis of Learning
  4.7 Exercises
  4.8 Further Reading and Advanced Topics
5 Optimisation Theory
  5.1 Problem Formulation
  5.2 Lagrangian Theory
  5.3 Duality
  5.4 Exercises
  5.5 Further Reading and Advanced Topics