Contents

Preface to the Series in Information and Computational Science
Preface

Chapter 1 Introduction
  1.1 Background in linear algebra
    1.1.1 Basic symbols, notations, and definitions
    1.1.2 Vector norm
    1.1.3 Matrix norm
    1.1.4 Spectral radii
  1.2 Spectral results of matrices
  1.3 Special matrices
    1.3.1 Reducible and irreducible matrices
    1.3.2 Diagonally dominant matrices
    1.3.3 Nonnegative matrices
    1.3.4 p-cyclic matrices
    1.3.5 Toeplitz, Hankel, Cauchy, Cauchy-like and Hessenberg matrices
  1.4 Matrix decomposition
    1.4.1 LU decomposition
    1.4.2 Singular value decomposition
    1.4.3 Conjugate decomposition
    1.4.4 QZ decomposition
    1.4.5 S & T decomposition
  1.5 Exercises

Chapter 2 Basic Methods and Convergence
  2.1 Basic concepts
  2.2 The Jacobi method
  2.3 The Gauss-Seidel method
  2.4 The SOR method
  2.5 M-matrices and splitting methods
    2.5.1 M-matrix
    2.5.2 Splitting methods
    2.5.3 Comparison theorems
    2.5.4 Multi-splitting methods
    2.5.5 Generalized Ostrowski-Reich theorem
  2.6 Error analysis of iterative methods
  2.7 Iterative refinement
  2.8 Exercises

Chapter 3 Non-stationary Methods
  3.1 Conjugate gradient methods
    3.1.1 Steepest descent method
    3.1.2 Conjugate gradient method
    3.1.3 Preconditioned conjugate gradient method
    3.1.4 Generalized conjugate gradient method
    3.1.5 Theoretical results on the conjugate gradient method
    3.1.6 Generalized product-type methods based on Bi-CG
    3.1.7 Inexact preconditioned conjugate gradient method
  3.2 Lanczos method
  3.3 GMRES method and QMR method
    3.3.1 GMRES method
    3.3.2 QMR method
    3.3.3 Variants of the QMR method
  3.4 Direct projection method
    3.4.1 Theory of the direct projection method
    3.4.2 Direct projection algorithms
  3.5 Semi-conjugate direction method
    3.5.1 Semi-conjugate vectors
    3.5.2 Left conjugate direction method
    3.5.3 One possible way to find a left conjugate vector set
    3.5.4 Remedy for breakdown
    3.5.5 Relation with Gaussian elimination
  3.6 Krylov subspace methods
  3.7 Exercises

Chapter 4 Iterative Methods for Least Squares Problems
  4.1 Introduction
  4.2 Basic iterative methods
  4.3 Block SOR methods
    4.3.1 Block SOR algorithms
    4.3.2 Convergence and optimal factors
    4.3.3 Example
  4.4 Preconditioned conjugate gradient methods
  4.5 Generalized least squares problems
    4.5.1 Block SOR methods
    4.5.2 Preconditioned conjugate gradient method
    4.5.3 Comparison
    4.5.4 SOR-like methods
  4.6 Rank deficient problems
    4.6.1 Augmented system of normal equations
    4.6.2 Block SOR algorithms
    4.6.3 Convergence and optimal factor
    4.6.4 Preconditioned conjugate gradient method
    4.6.5 Comparison results
  4.7 Exercises

Chapter 5 Preconditioners
  5.1 LU decomposition and orthogonal transformations
    5.1.1 Gilbert and Peierls algorithm for LU decomposition
    5.1.2 Orthogonal transformations
  5.2 Stationary preconditioners
    5.2.1 Jacobi preconditioner
    5.2.2 SSOR preconditioner
  5.3 Incomplete factorization
    5.3.1 Point incomplete factorization
    5.3.2 Modified incomplete factorization
    5.3.3 Block incomplete factorization
  5.4 Diagonally dominant preconditioner
  5.5 Preconditioners for least squares problems
    5.5.1 Preconditioner by LU decomposition
    5.5.2 Preconditioner by direct projection method
    5.5.3 Preconditioner by QR decomposition
  5.6 Exercises

Chapter 6 Singular Linear Systems
  6.1 Introduction
  6.2 Properties of singular systems
  6.3 Splitting methods for singular systems
  6.4 Nonstationary methods for singular systems
    6.4.1 Symmetric and positive semidefinite systems
    6.4.2 General systems
  6.5 Exercises

Bibliography
Index
Series in Information and Computational Science
Chapter 1 Introduction

In this chapter, we first give an overview of relevant concepts and some basic results of linear algebra and matrix theory. The material contained here will be used throughout the book.

1.1 Background in linear algebra

1.1.1 Basic symbols, notations, and definitions

Let R be the set of real numbers; C, the set of complex numbers; and i ≡ √(−1). The symbol R^n denotes the set of real n-vectors and C^n, the set of complex n-vectors. Greek letters α, β, γ, etc., denote real numbers or constants. Vectors are almost always column vectors. We use R^{m×n} to denote the linear vector space of all m-by-n real matrices and C^{m×n}, the linear vector space of all m-by-n complex matrices. The symbol dim(S) denotes the dimension of a linear vector space S. Upper case letters A, B, C, etc., denote matrices, and lower case letters x, y, z, etc., denote vectors. Let (A)_{ij} = a_{ij} denote the (i,j)th entry of a matrix A = (a_{ij}). For an n-by-n matrix, the indices i, j usually run from 1 to n, but sometimes run from 0 to n−1 for convenience. Let A^T be the transpose of A; A^*, the conjugate transpose of A; rank(A), the rank of A; and tr(A), the trace of A. An n-by-n diagonal matrix is denoted by diag(d_1, d_2, ..., d_n). We use the notation I_n for the n-by-n identity matrix; when there is no ambiguity, we shall write it as I. The symbol e_j denotes the jth unit vector, i.e., the jth column vector of I.

A matrix A ∈ R^{n×n} is symmetric if A^T = A, and skew-symmetric if A^T = −A. A symmetric matrix A is positive definite (semidefinite) if x^T A x > 0 (≥ 0) for any nonzero vector x ∈ R^n. A matrix A ∈ C^{n×n} is Hermitian if A^* = A. A Hermitian matrix A is positive definite (semidefinite) if x^* A x > 0 (≥ 0) for any nonzero vector x ∈ C^n.

A number λ ∈ C is an eigenvalue of A ∈ C^{n×n} if there exists a nonzero vector x ∈ C^n such that Ax = λx, where x is called an eigenvector of A associated with λ. It is well known that the eigenvalues of Hermitian matrices are real. Let λ_min(A) and λ_max(A) denote the smallest and largest eigenvalues of a Hermitian matrix A, respectively. We use ρ(A) = max_i |λ_i(A)| to denote the spectral radius of A, where λ_i(A) runs through the spectrum of A. Recall that the spectrum of A is the set of all eigenvalues of A.

We use ||·|| to denote a norm of a vector or matrix. The symbols ||·||_1, ||·||_2, ||·||_∞ denote the p-norm with p = 1, 2, ∞, respectively. We also use κ_α(A), defined by κ_α(A) = ||A||_α ||A^{−1}||_α, to denote the condition number of the matrix A. When the subscript α is omitted, the definition applies with any norm; the most frequently used norm, however, is the 2-norm. For A ∈ R^{m×n}, we use N(A) and R(A) to denote the null space and the range (image space) of A, respectively, where N(A) = {x ∈ R^n : Ax = 0} and R(A) = {y ∈ R^m : y = Ax for some x ∈ R^n}.

For matrix iterative analysis, we need some tools, such as vector norms, matrix norms and their extensions, and spectral radii.

1.1.2 Vector norm

A norm is an extension of the length of a vector in R^2 or of the absolute value in R. It is well known that for every x ∈ R, |x| = √(x^2) satisfies the following properties:
(1) |x| ≥ 0, and |x| = 0 if and only if x = 0;
(2) |αx| = |α| |x| for any α ∈ R;
(3) |x + y| ≤ |x| + |y| for any x, y ∈ R.

We generalize the three properties above to the vector space R^n as follows.

Definition 1.1.1 μ : R^n → R is a vector norm on R^n if
(1) μ(x) ≥ 0 for all x ∈ R^n, and μ(x) = 0 if and only if x = 0;
(2) μ(αx) = |α| μ(x) for all α ∈ R and x ∈ R^n;
(3) μ(x + y) ≤ μ(x) + μ(y) for all x, y ∈ R^n.

Example 1.1.1 There are three common norms on R^n, defined by
||x||_1 = Σ_{i=1}^n |x_i|,  ||x||_2 = (Σ_{i=1}^n x_i^2)^{1/2},  ||x||_∞ = max_{1≤i≤n} |x_i|.

There are some important elementary consequences of Definition 1.1.1 of the vector norm.

Proposition 1.1.1 Let μ be a vector norm on R^n. Then, for all x, y ∈ R^n,
|μ(x) − μ(y)| ≤ μ(x − y),  (1.1.1)
|μ(x) − μ(y)| ≤ μ(x + y).  (1.1.2)

Proof. By the triangle inequality,
μ(x) = μ((x − y) + y) ≤ μ(x − y) + μ(y).
Then,
μ(x) − μ(y) ≤ μ(x − y).  (1.1.3)
By interchanging x and y, we can obtain
μ(y) − μ(x) ≤ μ(y − x) = μ(x − y).  (1.1.4)
The result of (1.1.1) follows from (1.1.3) and (1.1.4) together. We can prove (1.1.2) if y is replaced by −y in (1.1.1).
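These definitions are easy to experiment with numerically. The following short sketch (not from the book; it uses Python with NumPy, and the sample vectors and matrix are hypothetical, chosen only for illustration) evaluates the three norms of Example 1.1.1, checks Proposition 1.1.1 for the 2-norm, and computes the spectral radius ρ(A) and condition number κ_2(A) defined above:

```python
import numpy as np

# Hypothetical sample data, for illustration only.
x = np.array([3.0, -4.0, 1.0])
y = np.array([1.0, 2.0, -2.0])

# The three common norms of Example 1.1.1.
norm_1 = np.sum(np.abs(x))            # ||x||_1   = sum_i |x_i|
norm_2 = np.sqrt(np.sum(x ** 2))      # ||x||_2   = (sum_i x_i^2)^(1/2)
norm_inf = np.max(np.abs(x))          # ||x||_inf = max_i |x_i|
print(norm_1, norm_2, norm_inf)       # 8.0  5.0990...  4.0

# Proposition 1.1.1 checked numerically for the 2-norm:
# |mu(x) - mu(y)| <= mu(x - y).
lhs = abs(np.linalg.norm(x) - np.linalg.norm(y))
assert lhs <= np.linalg.norm(x - y)

# Spectral radius rho(A) = max_i |lambda_i(A)| and condition number
# kappa_2(A) = ||A||_2 * ||A^{-1}||_2 from Section 1.1.1.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
rho = np.max(np.abs(np.linalg.eigvals(A)))
kappa_2 = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
print(rho, kappa_2)                   # kappa_2 agrees with np.linalg.cond(A, 2)
```

In practice one would call np.linalg.cond(A, 2) directly rather than forming the inverse; the explicit product above only mirrors the definition of κ_2(A).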
The 2-norm is the natural generalization of the Euclidean length of a vector in R^2 or R^3 and is called the Euclidean norm. The ∞-norm is also sometimes called the maximum norm or the Chebyshev norm. In fact, they are special cases of the p-norm, defined for p ≥ 1 by
||x||_p = (Σ_{i=1}^n |x_i|^p)^{1/p}.

Sometimes the usual norms are not enough for our research and we have to construct a new norm. One useful technique for constructing new norms from a well-known norm is given in the following theorem.

Theorem 1.1.2 Let ν be a norm on R^m and let A ∈ R^{m×n} have linearly independent columns. Then μ(x) = ν(Ax) is a norm on R^n.

The proof amounts to checking the properties of the norm in Definition 1.1.1 and is left to the reader. This technique also works for matrix norms in the next subsection.

Corollary 1.1.3 Let A ∈ R^{n×n} be symmetric and positive definite. Then μ(x) = √(x^T A x) is a norm on R^n, denoted ||x||_A, and called the weighted norm (with A).

When we study iterative methods, we have to know whether the sequence generated by an iterative method converges to the solution. For this purpose, we shall give some concepts about limits of sequences in vector spaces.

Definition 1.1.2 Let {x^(k)} be a sequence of n-vectors, and let x ∈ R^n. Then x is a limit of the sequence {x^(k)} (written x = lim_{k→∞} x^(k)) if
lim_{k→∞} x_i^(k) = x_i, i = 1, 2, ..., n,
where the x_i (i = 1, 2, ..., n) are the components of x.

By the definition, x = lim_{k→∞} x^(k) if and only if lim_{k→∞} ||x − x^(k)||_∞ = 0. Furthermore, it follows from the equivalence of vector norms that
x = lim_{k→∞} x^(k)  ⟺  lim_{k→∞} μ(x − x^(k)) = 0,
where μ is any norm on R^n.

1.1.3 Matrix norm

Definition 1.1.3 μ : R^{m×n} → R is a matrix norm on R^{m×n} if
(1) μ(A) ≥ 0 for all A ∈ R^{m×n}, and μ(A) = 0 if and only if A = 0;
(2) μ(αA) = |α| μ(A) for all α ∈ R and A ∈ R^{m×n};
(3) μ(A + B) ≤ μ(A) + μ(B) for all A, B ∈ R^{m×n}.

Example 1.1.2 Let A
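To make the construction of Theorem 1.1.2 and Corollary 1.1.3 above concrete, here is a minimal numerical sketch (again Python with NumPy; the matrix and vector are hypothetical, and this is an illustration rather than the book's own example). For a symmetric positive definite A with Cholesky factorization A = L L^T, we have x^T A x = ||L^T x||_2^2, so the weighted norm ||x||_A is exactly μ(x) = ν(Bx) with B = L^T (nonsingular, hence with linearly independent columns) and ν the 2-norm:

```python
import numpy as np

# Hypothetical symmetric positive definite matrix and test vector.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
x = np.array([1.0, -2.0])

# Weighted norm ||x||_A = sqrt(x^T A x) from Corollary 1.1.3.
norm_A = np.sqrt(x @ A @ x)

# Cholesky factorization A = L L^T gives x^T A x = ||L^T x||_2^2,
# i.e. the weighted norm is an instance of Theorem 1.1.2 with B = L^T.
L = np.linalg.cholesky(A)             # lower triangular, A = L @ L.T
norm_via_theorem = np.linalg.norm(L.T @ x, 2)

print(np.isclose(norm_A, norm_via_theorem))   # True
```

The same identity explains why ||·||_A inherits the norm axioms of Definition 1.1.1 directly from the 2-norm.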