¥76 · Condition: 9/10
In stock: 2
Author: [US] Demmel
Publisher: Tsinghua University Press
Publication date: 2011-02
Edition: 1
Binding: Paperback
Listed: 2024-05-28
Preface
1 Introduction
1.1 Basic Notation
1.2 Standard Problems of Numerical Linear Algebra
1.3 General Techniques
1.3.1 Matrix Factorizations
1.3.2 Perturbation Theory and Condition Numbers
1.3.3 Effects of Roundoff Error on Algorithms
1.3.4 Analyzing the Speed of Algorithms
1.3.5 Engineering Numerical Software
1.4 Example: Polynomial Evaluation
1.5 Floating Point Arithmetic
1.5.1 Further Details
1.6 Polynomial Evaluation Revisited
1.7 Vector and Matrix Norms
1.8 References and Other Topics for Chapter 1
1.9 Questions for Chapter 1
2 Linear Equation Solving
2.1 Introduction
2.2 Perturbation Theory
2.2.1 Relative Perturbation Theory
2.3 Gaussian Elimination
2.4 Error Analysis
2.4.1 The Need for Pivoting
2.4.2 Formal Error Analysis of Gaussian Elimination
2.4.3 Estimating Condition Numbers
2.4.4 Practical Error Bounds
2.5 Improving the Accuracy of a Solution
2.5.1 Single Precision Iterative Refinement
2.5.2 Equilibration
2.6 Blocking Algorithms for Higher Performance
2.6.1 Basic Linear Algebra Subroutines (BLAS)
2.6.2 How to Optimize Matrix Multiplication
2.6.3 Reorganizing Gaussian Elimination to Use Level 3 BLAS
2.6.4 More About Parallelism and Other Performance Issues
2.7 Special Linear Systems
2.7.1 Real Symmetric Positive Definite Matrices
2.7.2 Symmetric Indefinite Matrices
2.7.3 Band Matrices
2.7.4 General Sparse Matrices
2.7.5 Dense Matrices Depending on Fewer Than O(n²) Parameters
2.8 References and Other Topics for Chapter 2
2.9 Questions for Chapter 2
3 Linear Least Squares Problems
3.1 Introduction
3.2 Matrix Factorizations That Solve the Linear Least Squares Problem
3.2.1 Normal Equations
3.2.2 QR Decomposition
3.2.3 Singular Value Decomposition
3.3 Perturbation Theory for the Least Squares Problem
3.4 Orthogonal Matrices
3.4.1 Householder Transformations
3.4.2 Givens Rotations
3.4.3 Roundoff Error Analysis for Orthogonal Matrices
3.4.4 Why Orthogonal Matrices?
3.5 Rank-Deficient Least Squares Problems
3.5.1 Solving Rank-Deficient Least Squares Problems Using the SVD
3.5.2 Solving Rank-Deficient Least Squares Problems Using QR with Pivoting
3.6 Performance Comparison of Methods for Solving Least Squares Problems
3.7 References and Other Topics for Chapter 3
3.8 Questions for Chapter 3
4 Nonsymmetric Eigenvalue Problems
4.1 Introduction
4.2 Canonical Forms
4.2.1 Computing Eigenvectors from the Schur Form
4.3 Perturbation Theory
4.4 Algorithms for the Nonsymmetric Eigenproblem
4.4.1 Power Method
4.4.2 Inverse Iteration
4.4.3 Orthogonal Iteration
4.4.4 QR Iteration
4.4.5 Making QR Iteration Practical
4.4.6 Hessenberg Reduction
4.4.7 Tridiagonal and Bidiagonal Reduction
4.4.8 QR Iteration with Implicit Shifts
4.5 Other Nonsymmetric Eigenvalue Problems
4.5.1 Regular Matrix Pencils and Weierstrass Canonical Form
4.5.2 Singular Matrix Pencils and the Kronecker Canonical Form
4.5.3 Nonlinear Eigenvalue Problems
4.6 Summary
4.7 References and Other Topics for Chapter 4
4.8 Questions for Chapter 4
5 The Symmetric Eigenproblem and Singular Value Decomposition
5.1 Introduction
5.2 Perturbation Theory
5.2.1 Relative Perturbation Theory
5.3 Algorithms for the Symmetric Eigenproblem
5.3.1 Tridiagonal QR Iteration
5.3.2 Rayleigh Quotient Iteration
5.3.3 Divide-and-Conquer
5.3.4 Bisection and Inverse Iteration
5.3.5 Jacobi's Method
5.3.6 Performance Comparison
5.4 Algorithms for the Singular Value Decomposition
5.4.1 QR Iteration and Its Variations for the Bidiagonal SVD
5.4.2 Computing the Bidiagonal SVD to High Relative Accuracy
5.4.3 Jacobi's Method for the SVD
5.5 Differential Equations and Eigenvalue Problems
5.5.1 The Toda Lattice
5.5.2 The Connection to Partial Differential Equations
5.6 References and Other Topics for Chapter 5
5.7 Questions for Chapter 5
6 Iterative Methods for Linear Systems
6.1 Introduction
6.2 On-line Help for Iterative Methods
6.3 Poisson's Equation
6.3.1 Poisson's Equation in One Dimension
6.3.2 Poisson's Equation in Two Dimensions
6.3.3 Expressing Poisson's Equation with Kronecker Products
6.4 Summary of Methods for Solving Poisson's Equation
6.5 Basic Iterative Methods
6.5.1 Jacobi's Method
6.5.2 Gauss-Seidel Method
6.5.3 Successive Overrelaxation
6.5.4 Convergence of Jacobi's, Gauss-Seidel, and SOR Methods on the Model Problem
6.5.5 Detailed Convergence Criteria for Jacobi's, Gauss-Seidel, and SOR(w) Methods
6.5.6 Chebyshev Acceleration and Symmetric SOR (SSOR)
6.6 Krylov Subspace Methods
6.6.1 Extracting Information about A via Matrix-Vector Multiplication
6.6.2 Solving Ax = b Using the Krylov Subspace
6.6.3 Conjugate Gradient Method
6.6.4 Convergence Analysis of the Conjugate Gradient Method
6.6.5 Preconditioning
6.6.6 Other Krylov Subspace Algorithms for Solving Ax = b
6.7 Fast Fourier Transform
6.7.1 The Discrete Fourier Transform
6.7.2 Solving the Continuous Model Problem Using Fourier Series
6.7.3 Convolutions
6.7.4 Computing the Fast Fourier Transform
6.8 Block Cyclic Reduction
6.9 Multigrid
6.9.1 Overview of Multigrid on the Two-Dimensional Poisson's Equation
6.9.2 Detailed Description of Multigrid on the One-Dimensional Poisson's Equation
6.10 Domain Decomposition
6.10.1 Nonoverlapping Methods
6.10.2 Overlapping Methods
6.11 References and Other Topics for Chapter 6
6.12 Questions for Chapter 6
7 Iterative Methods for Eigenvalue Problems
7.1 Introduction
7.2 The Rayleigh-Ritz Method
7.3 The Lanczos Algorithm in Exact Arithmetic
7.4 The Lanczos Algorithm in Floating Point Arithmetic
7.5 The Lanczos Algorithm with Selective Orthogonalization
7.6 Beyond Selective Orthogonalization
7.7 Iterative Algorithms for the Nonsymmetric Eigenproblem
7.8 References and Other Topics for Chapter 7
7.9 Questions for Chapter 7
Bibliography
Index