
COMPUTER SOLUTION
OF LARGE LINEAR SYSTEMS

Gérard MEURANT

North-Holland
1999

Vol. 28, Studies in Mathematics and Its Applications
ISBN 0-444-50169-X, 776 pages



This book covers most of the best algorithms known so far for solving sparse linear systems of equations. It should be useful for engineers and scientists as well as graduate students interested in numerical computations.

Preface

This book covers both direct and iterative methods for solving non-singular sparse linear systems of equations, particularly those arising from the discretization of partial differential equations. Many problems in physics and other scientific areas lead to partial differential equations or systems of partial differential equations. Most of the time these equations are non-linear. The discretization of these equations, combined with iterative methods for handling the non-linearity (like Newton's method), leads to solving sparse linear systems. Very large sparse systems can now be solved thanks to progress in numerical simulation and also to increases in computer speed and in the amount of memory available. Solving sparse linear systems is therefore a problem of paramount importance in numerical computation. Many methods have been invented over the years. Unfortunately, some of the older methods are no longer efficient when applied to very large systems, and there is still a need for very active research in this area. In recent years there has been a rapid development of parallel computers. Therefore, we put some emphasis on the adaptation of known methods to parallel computation and also on the development of new parallel algorithms.
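The route from a non-linear partial differential equation to a sparse linear system can be illustrated on a small model problem. The sketch below is not taken from the book; the problem -u'' + u^3 = 1 on (0, 1) with homogeneous Dirichlet conditions, the grid size, and all function names are illustrative assumptions. Each Newton step reduces to solving a sparse (here tridiagonal) linear system with the Jacobian matrix:

```python
import numpy as np

def solve_tridiag(lower, diag, upper, rhs):
    """Thomas algorithm: a sparse direct solver for tridiagonal systems."""
    n = len(diag)
    c, d, b = upper.astype(float).copy(), rhs.astype(float).copy(), diag.astype(float).copy()
    for i in range(1, n):           # forward elimination
        m = lower[i - 1] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    x = np.empty(n)
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

n = 100                                  # interior grid points (assumed)
h = 1.0 / (n + 1)
off = -np.ones(n - 1) / h**2             # off-diagonals of the discrete -u''
diag0 = 2.0 * np.ones(n) / h**2          # main diagonal of the discrete -u''

def apply_A(u):
    """Matrix-vector product with the tridiagonal discretization of -u''."""
    r = diag0 * u
    r[:-1] += off * u[1:]
    r[1:] += off * u[:-1]
    return r

u = np.zeros(n)
for _ in range(20):
    F = apply_A(u) + u**3 - 1.0          # non-linear residual
    # Sparse Jacobian solve: J = A + diag(3 u^2), a tridiagonal matrix
    du = solve_tridiag(off, diag0 + 3.0 * u**2, off, -F)
    u += du
    if np.linalg.norm(du) < 1e-12:       # Newton converged
        break
```

For realistic two- or three-dimensional problems the Jacobian is no longer tridiagonal but is still sparse, and the linear solve inside each Newton step is exactly the kind of problem this book addresses.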

The first chapter recalls some mathematical results and tools that are needed in the following chapters. The direct methods we consider are different versions of Gaussian elimination and also fast solvers for separable partial differential equations on domains with a simple geometry. Chapter 2 is devoted to Gaussian elimination for general linear systems and Chapter 3 focuses on special techniques and variants of Gaussian elimination for sparse systems. Some fast solvers for separable partial differential equations on rectangular domains are described and analyzed in Chapter 4. Classical iterative methods are recalled in Chapter 5. Even though they are not really efficient for solving the problems we consider, they are still of use for constructing preconditioners or within the multigrid method, so it is worthwhile to study them. In Chapter 6 the emphasis is put on the conjugate gradient (CG) method, which is probably the most efficient method for solving sparse symmetric positive definite linear systems. We also consider some methods for symmetric indefinite problems. In Chapter 7 we consider some of the many generalizations of the conjugate gradient method that have appeared for non-symmetric matrices. A very important issue for using CG or CG-like methods efficiently is preconditioning. Therefore, in Chapter 8 we describe many of the attempts to derive efficient preconditioners. An introduction to the multigrid method is given in Chapter 9. This method is interesting as it can solve a problem with a complexity proportional to the number of unknowns. Chapter 10 describes some domain decomposition and multilevel techniques. This is a very natural framework when solving linear systems on parallel computers.
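As a point of reference for the method emphasized in Chapter 6, the conjugate gradient iteration can be stated in a few lines. The following is a minimal unpreconditioned sketch, assuming a symmetric positive definite matrix A given as a NumPy array; the function name, tolerance, and iteration limit are our own illustrative choices, not the book's:

```python
import numpy as np

def cg(A, b, tol=1e-10, maxit=1000):
    """Minimal unpreconditioned conjugate gradient sketch for SPD A."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                        # initial residual
    p = r.copy()                         # first search direction
    rs = r @ r
    for _ in range(maxit):
        Ap = A @ p
        alpha = rs / (p @ Ap)            # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:        # residual small enough
            break
        p = r + (rs_new / rs) * p        # next A-conjugate direction
        rs = rs_new
    return x
```

Only matrix-vector products with A are needed, which is why the method suits large sparse matrices; the preconditioned variants of Chapters 6 and 8 insert one extra solve with the preconditioner per iteration.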

This book grew out of lectures given at Paris VI University from 1984 to 1995 and also out of research and review papers written since 1982. However, we have tried to give a broad view of the field and to describe state-of-the-art algorithms. Therefore the works of many people are described and quoted in this book, too many to thank individually. A large reference list gives the papers and reports we have found interesting and useful over the years.


Table of Contents

1/ Introductory Material

1.1.  Vector and matrices norms . . . . . . . . . . . . . . . . . . . 2
1.2.  Eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3.  Irreducibility and diagonal dominance. . . . . . . . . . . . . 17
1.4.  M-Matrices and generalizations . . . . . . . . . . . . . . . . 23
1.5.  Splittings . . . . . . . . . . . . . . . . . . . . . . . . . . 29
1.6.  Positive definite matrices . . . . . . . . . . . . . . . . . . 30
1.7.  The graph of a matrix. . . . . . . . . . . . . . . . . . . . . 35
1.8.  Chebyshev polynomials. . . . . . . . . . . . . . . . . . . . . 38
1.9.  Discretization methods for partial differential equations. . . 41
1.10. Eigenvalues and Fourier analysis . . . . . . . . . . . . . . . 50
1.11. Floating point arithmetic. . . . . . . . . . . . . . . . . . . 58
1.12. Vector and parallel computers. . . . . . . . . . . . . . . . . 62
1.13. BLAS and LAPACK. . . . . . . . . . . . . . . . . . . . . . . . 65
1.14. Bibliographical comments . . . . . . . . . . . . . . . . . . . 67

2/ Gaussian elimination for general linear systems

2.1.  Introduction to Gaussian elimination. . . . . . . . . . . . .  69
2.1.1. Gaussian elimination without permutations. . . . . . . . . .  70
2.1.2. Gaussian elimination with permutations (partial pivoting). .  78
2.1.3. Gaussian elimination with other pivoting strategies. . . . .  80
2.1.4. Operation counts . . . . . . . . . . . . . . . . . . . . . .  81
2.2.  Gaussian elimination for symmetric systems. . . . . . . . . .  82
2.2.1. The outer product algorithm. . . . . . . . . . . . . . . . .  82
2.2.2. The bordering algorithm. . . . . . . . . . . . . . . . . . .  84
2.2.3. The inner product algorithm. . . . . . . . . . . . . . . . .  84
2.2.4. Coding the three factorization algorithms. . . . . . . . . .  85
2.2.5. Positive definite systems. . . . . . . . . . . . . . . . . .  93
2.2.6. Indefinite systems . . . . . . . . . . . . . . . . . . . . .  94
2.3.  Gaussian elimination for H-matrices . . . . . . . . . . . . .  95
2.4.  Block methods . . . . . . . . . . . . . . . . . . . . . . . .  99
2.5.  Tridiagonal and block tridiagonal systems . . . . . . . . . .  99
2.6.  Roundoff error analysis . . . . . . . . . . . . . . . . . . . 111
2.7.  Perturbation analysis . . . . . . . . . . . . . . . . . . . . 117
2.8.  Scaling . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
2.9.  Iterative refinement. . . . . . . . . . . . . . . . . . . . . 122
2.10. Parallel solution of general linear systems . . . . . . . . . 123
2.11. Bibliographical comments. . . . . . . . . . . . . . . . . . . 128

3/ Gaussian elimination for sparse linear systems

3.1.  Introduction. . . . . . . . . . . . . . . . . . . . . . . . . 131
3.2.  The fill-in phenomenon. . . . . . . . . . . . . . . . . . . . 132
3.3.  Graphs and fill-in for symmetric matrices . . . . . . . . . . 134
3.4.  Characterization of the fill-in . . . . . . . . . . . . . . . 137
3.5.  Band and envelope numbering schemes for symmetric matrices. . 140
3.5.1. The Cuthill-McKee and reverse Cuthill-McKee orderings. . . . 141
3.5.2. Sloan's algorithm. . . . . . . . . . . . . . . . . . . . . . 146
3.6.  Spectral schemes. . . . . . . . . . . . . . . . . . . . . . . 148
3.6.1. The basic idea . . . . . . . . . . . . . . . . . . . . . . . 148
3.6.2. The multilevel spectral algorithm. . . . . . . . . . . . . . 150
3.6.3. The Kumfert and Pothen hybrid algorithm. . . . . . . . . . . 151
3.6.4. The Boman-Hendrickson multilevel algorithm . . . . . . . . . 152
3.7.  The minimum degree ordering . . . . . . . . . . . . . . . . . 152
3.8.  The nested dissection ordering. . . . . . . . . . . . . . . . 155
3.9.  Generalization of dissection algorithms . . . . . . . . . . . 159
3.9.1. General dissection algorithms. . . . . . . . . . . . . . . . 159
3.9.2. Graph bisection improvement techniques . . . . . . . . . . . 161
3.9.3. The multisection algorithm . . . . . . . . . . . . . . . . . 163
3.10. The multifrontal method . . . . . . . . . . . . . . . . . . . 163
3.11. Non-symmetric sparse matrices . . . . . . . . . . . . . . . . 166
3.12. Numerical stability for sparse matrices . . . . . . . . . . . 169
3.13. Parallel algorithms for sparse matrices . . . . . . . . . . . 170
3.14. Bibliographical comments. . . . . . . . . . . . . . . . . . . 176

4/ Fast solvers for separable PDEs

4.1.  Introduction. . . . . . . . . . . . . . . . . . . . . . . . . 177
4.2.  Fast Fourier Transform. . . . . . . . . . . . . . . . . . . . 179
4.2.1. The basics of the FFT. . . . . . . . . . . . . . . . . . . . 179
4.2.2. The complex FFT. . . . . . . . . . . . . . . . . . . . . . . 180
4.2.3. The real transforms. . . . . . . . . . . . . . . . . . . . . 183
4.2.4. FFT on vector and parallel computers . . . . . . . . . . . . 186
4.2.5. Stability of the FFT . . . . . . . . . . . . . . . . . . . . 186
4.2.6. Other algorithms . . . . . . . . . . . . . . . . . . . . . . 187
4.2.7. Double Fourier analysis. . . . . . . . . . . . . . . . . . . 187
4.3.  The Fourier/tridiagonal Method. . . . . . . . . . . . . . . . 187
4.4.  The cyclic reduction method . . . . . . . . . . . . . . . . . 191
4.5.  The FACR(l) method. . . . . . . . . . . . . . . . . . . . . . 202
4.6.  The capacitance matrix method . . . . . . . . . . . . . . . . 203
4.7.  Bibliographical comments. . . . . . . . . . . . . . . . . . . 206

5/ Classical iterative methods

5.1.  Introduction. . . . . . . . . . . . . . . . . . . . . . . . . 209
5.2.  The Jacobi method . . . . . . . . . . . . . . . . . . . . . . 210
5.3.  The Gauss-Seidel method . . . . . . . . . . . . . . . . . . . 218
5.4.  The SOR Method. . . . . . . . . . . . . . . . . . . . . . . . 223
5.5.  The SSOR method . . . . . . . . . . . . . . . . . . . . . . . 229
5.6.  Alternating direction methods . . . . . . . . . . . . . . . . 233
5.7.  Richardson methods. . . . . . . . . . . . . . . . . . . . . . 241
5.8.  Acceleration techniques . . . . . . . . . . . . . . . . . . . 246
5.9.  Stability of classical iterative methods. . . . . . . . . . . 248
5.10. Bibliographical comments. . . . . . . . . . . . . . . . . . . 250

6/ The conjugate gradient and related methods

6.1.  Derivation of the method. . . . . . . . . . . . . . . . . . . 251
6.2.  Generalization and second form of PCG . . . . . . . . . . . . 256
6.3.  Optimality of PCG . . . . . . . . . . . . . . . . . . . . . . 260
6.4.  The convergence rate of PCG . . . . . . . . . . . . . . . . . 265
6.5.  The Lanczos algorithm . . . . . . . . . . . . . . . . . . . . 281
6.6.  A posteriori error bounds . . . . . . . . . . . . . . . . . . 284
6.7.  Eisenstat's trick. . . . . . . . . . . . . . . . . . . . . . 302
6.8.  The Conjugate Residual method . . . . . . . . . . . . . . . . 303
6.9.  SYMMLQ. . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
6.10. The minimum residual method . . . . . . . . . . . . . . . . . 307
6.11. Hybrid algorithms . . . . . . . . . . . . . . . . . . . . . . 308
6.12. Roundoff errors of CG and Lanczos . . . . . . . . . . . . . . 310
6.13. Solving for several right hand sides. . . . . . . . . . . . . 315
6.14. Block CG and Lanczos. . . . . . . . . . . . . . . . . . . . . 319
6.14.1. The block Lanczos algorithm . . . . . . . . . . . . . . . . 319
6.14.2. The Block CG algorithm. . . . . . . . . . . . . . . . . . . 322
6.15. Inner and outer iterations. . . . . . . . . . . . . . . . . . 323
6.16. Constrained CG. . . . . . . . . . . . . . . . . . . . . . . . 324
6.17. Vector and parallel PCG . . . . . . . . . . . . . . . . . . . 325
6.18. Bibliographical comments. . . . . . . . . . . . . . . . . . . 329

7/ Krylov methods for non-symmetric systems

7.1.  The normal equations. . . . . . . . . . . . . . . . . . . . . 332
7.2.  The Concus and Golub non-symmetric CG . . . . . . . . . . . . 335
7.3.  Construction of basis for Krylov spaces . . . . . . . . . . . 337
7.3.1. The Arnoldi algorithm. . . . . . . . . . . . . . . . . . . . 337
7.3.2. The Hessenberg algorithm . . . . . . . . . . . . . . . . . . 339
7.3.3. The generalized Hessenberg process . . . . . . . . . . . . . 341
7.4.  FOM and GMRES . . . . . . . . . . . . . . . . . . . . . . . . 342
7.4.1. Definition of FOM and GMRES. . . . . . . . . . . . . . . . . 342
7.4.2. Convergence results. . . . . . . . . . . . . . . . . . . . . 346
7.4.3. Truncated and restarted versions . . . . . . . . . . . . . . 351
7.4.4. Methods equivalent to GMRES. . . . . . . . . . . . . . . . . 351
7.4.5. Methods equivalent to FOM. . . . . . . . . . . . . . . . . . 357
7.5.  Roundoff error analysis of GMRES. . . . . . . . . . . . . . . 357
7.6.  Extensions to GMRES . . . . . . . . . . . . . . . . . . . . . 360
7.6.1. Flexible GMRES . . . . . . . . . . . . . . . . . . . . . . . 360
7.6.2. GMRES*. . . . . . . . . . . . . . . . . . . . . . . . . . . 361
7.7.  Hybrid GMRES algorithms . . . . . . . . . . . . . . . . . . . 363
7.8.  The non-symmetric Lanczos algorithm . . . . . . . . . . . . . 364
7.8.1. Definition of the non-symmetric Lanczos algorithm. . . . . . 365
7.8.2. Variants of the non-symmetric Lanczos algorithm. . . . . . . 367
7.8.3. Maintaining semi bi-orthogonality. . . . . . . . . . . . . . 368
7.9.  The BiConjugate Gradient Algorithm. . . . . . . . . . . . . . 370
7.10. Roundoff error analysis of BiCG . . . . . . . . . . . . . . . 373
7.11. Handling of breakdowns. . . . . . . . . . . . . . . . . . . . 374
7.11.1. FOP . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
7.11.2. Padé approximation. . . . . . . . . . . . . . . . . . . . . 376
7.11.3. Block bi-orthogonality. . . . . . . . . . . . . . . . . . . 378
7.11.4. Modified Krylov spaces. . . . . . . . . . . . . . . . . . . 379
7.12. The Conjugate Gradient Squared algorithm. . . . . . . . . . . 379
7.13. Extensions of BiCG. . . . . . . . . . . . . . . . . . . . . . 382
7.14. The Quasi Minimal Residual algorithm. . . . . . . . . . . . . 387
7.15. CMRH. . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
7.16. Which method to use?. . . . . . . . . . . . . . . . . . . . . 391
7.17. Complex linear systems. . . . . . . . . . . . . . . . . . . . 392
7.18. Krylov methods on parallel computers. . . . . . . . . . . . . 394
7.19. Bibliographical comments. . . . . . . . . . . . . . . . . . . 394

8/ Preconditioning

8.1.  Introduction. . . . . . . . . . . . . . . . . . . . . . . . . 397
8.2.  The diagonal preconditioner . . . . . . . . . . . . . . . . . 399
8.3.  The SSOR preconditioner . . . . . . . . . . . . . . . . . . . 401
8.3.1. Definition of SSOR . . . . . . . . . . . . . . . . . . . . . 401
8.3.2. Convergence results for SSOR . . . . . . . . . . . . . . . . 402
8.3.3. Fourier analysis of SSOR . . . . . . . . . . . . . . . . . . 405
8.4.  The block SSOR preconditioner . . . . . . . . . . . . . . . . 407
8.4.1. Definition of BSSOR. . . . . . . . . . . . . . . . . . . . . 407
8.4.2. Analysis of BSSOR. . . . . . . . . . . . . . . . . . . . . . 407
8.4.3. Fourier analysis of BSSOR. . . . . . . . . . . . . . . . . . 409
8.5.  The incomplete Cholesky decomposition . . . . . . . . . . . . 416
8.5.1. The general decomposition. . . . . . . . . . . . . . . . . . 416
8.5.2. Incomplete decomposition of H-matrices . . . . . . . . . . . 418
8.5.3. Incomplete decomposition of non-symmetric matrices . . . . . 422
8.5.4. Different incomplete decomposition strategies. . . . . . . . 423
8.5.5. Finite difference matrices . . . . . . . . . . . . . . . . . 424
8.5.6. Fourier analysis of IC(1,1). . . . . . . . . . . . . . . . . 427
8.5.7. Comparison of periodic and Dirichlet boundary conditions . . 430
8.5.8. Axelsson's results . . . . . . . . . . . . . . . . . . . . . 441
8.6.  The modified incomplete Cholesky decomposition. . . . . . . . 442
8.6.1. The DKR preconditioner . . . . . . . . . . . . . . . . . . . 442
8.6.2. Analysis of DKR. . . . . . . . . . . . . . . . . . . . . . . 443
8.6.3. Fourier analysis of DKR. . . . . . . . . . . . . . . . . . . 446
8.6.4. Extensions of DKR. . . . . . . . . . . . . . . . . . . . . . 448
8.7.  The relaxed incomplete Cholesky decomposition . . . . . . . . 448
8.8.  More on the incomplete decompositions for the model problem . 449
8.9.  Stability of incomplete decomposition . . . . . . . . . . . . 452
8.10. The generalized SSOR preconditioner . . . . . . . . . . . . . 454
8.11. Incomplete decomposition of positive definite matrices. . . . 458
8.12. Different orderings for IC. . . . . . . . . . . . . . . . . . 461
8.12.1. Experimental results. . . . . . . . . . . . . . . . . . . . 461
8.12.2. Theory for model problems . . . . . . . . . . . . . . . . . 467
8.12.3. Value dependent orderings . . . . . . . . . . . . . . . . . 470
8.12.4. Multicolor orderings. . . . . . . . . . . . . . . . . . . . 471
8.13. The repeated Red-Black decomposition. . . . . . . . . . . . . 472
8.13.1. Description of the methods. . . . . . . . . . . . . . . . . 472
8.13.2. Analysis of RRB . . . . . . . . . . . . . . . . . . . . . . 475
8.14. The block incomplete Cholesky decomposition . . . . . . . . . 479
8.14.1. Block tridiagonal matrices. . . . . . . . . . . . . . . . . 479
8.14.2. Pointwise equivalent decomposition. . . . . . . . . . . . . 480
8.14.3. The modified incomplete block decomposition . . . . . . . . 481
8.14.4. Block incomplete decomposition for H-matrices . . . . . . . 482
8.14.5. Generalization of the block incomplete decomposition. . . . 482
8.14.6. Fourier analysis of INV and MINV. . . . . . . . . . . . . . 482
8.14.7. Axelsson's results. . . . . . . . . . . . . . . . . . . . . 493
8.14.8. Block-size reduction. . . . . . . . . . . . . . . . . . . . 494
8.15. The block Cholesky decomposition for 3D problems. . . . . . . 494
8.15.1. Point preconditioners . . . . . . . . . . . . . . . . . . . 495
8.15.2. 1D block preconditioners. . . . . . . . . . . . . . . . . . 496
8.15.3. 2D point preconditioners. . . . . . . . . . . . . . . . . . 497
8.15.4. 2D block preconditioners. . . . . . . . . . . . . . . . . . 499
8.16. Nested factorization. . . . . . . . . . . . . . . . . . . . . 501
8.16.1. The ACP preconditioner for 3D problems. . . . . . . . . . . 502
8.16.2. The preconditioner of Appleyard, Cheshire and Pollard . . . 502
8.16.3. Improvements of ACP . . . . . . . . . . . . . . . . . . . . 503
8.17. Sparse approximate inverses . . . . . . . . . . . . . . . . . 504
8.17.1. The sparse inverses of Huckle and Grote . . . . . . . . . . 505
8.17.2. The sparse inverses of Gould and Scott. . . . . . . . . . . 506
8.17.3. The sparse inverses of Chow and Saad. . . . . . . . . . . . 506
8.17.4. Sparse approximate inverses for symmetric matrices. . . . . 507
8.17.5. The sparse inverses of Benzi, Meyer and Tuma. . . . . . . . 507
8.18. Polynomial preconditioners. . . . . . . . . . . . . . . . . . 509
8.18.1. Truncated Neumann series. . . . . . . . . . . . . . . . . . 510
8.18.2. The minmax polynomial . . . . . . . . . . . . . . . . . . . 511
8.18.3. Least squares polynomials . . . . . . . . . . . . . . . . . 514
8.18.4. Stable evaluation of polynomials. . . . . . . . . . . . . . 520
8.18.5. A polynomial independent of eigenvalue estimates. . . . . . 525
8.18.6. Adaptive algorithms for SPD matrices. . . . . . . . . . . . 526
8.18.7. Polynomials for symmetric indefinite problems . . . . . . . 527
8.18.8. Polynomials for non-symmetric problems. . . . . . . . . . . 528
8.19. Double preconditioners. . . . . . . . . . . . . . . . . . . . 528
8.20. Other ideas . . . . . . . . . . . . . . . . . . . . . . . . . 529
8.20.1. ADI preconditioner. . . . . . . . . . . . . . . . . . . . . 529
8.20.2. ADDKR preconditioner. . . . . . . . . . . . . . . . . . . . 530
8.20.3. Element by element preconditioner . . . . . . . . . . . . . 530
8.20.4. Fast solvers. . . . . . . . . . . . . . . . . . . . . . . . 531
8.20.5. Wavelets  . . . . . . . . . . . . . . . . . . . . . . . . . 531
8.21. Vector and parallel computing . . . . . . . . . . . . . . . . 532
8.21.1. Vectorization of IC(1,1). . . . . . . . . . . . . . . . . . 533
8.21.2. Parallel orderings. . . . . . . . . . . . . . . . . . . . . 534
8.21.3. Vectorization of INV. . . . . . . . . . . . . . . . . . . . 535
8.21.4. Twisted incomplete block factorizations . . . . . . . . . . 535
8.21.5. Incomplete block cyclic reduction . . . . . . . . . . . . . 537
8.21.6. A massively parallel preconditioner . . . . . . . . . . . . 538
8.22. Bibliographical comments. . . . . . . . . . . . . . . . . . . 539

9/ Multigrid methods

9.1.  Introduction. . . . . . . . . . . . . . . . . . . . . . . . . 541
9.2.  The two-grid method . . . . . . . . . . . . . . . . . . . . . 542
9.3.  A one dimensional example . . . . . . . . . . . . . . . . . . 546
9.3.1. The choice of the smoothing. . . . . . . . . . . . . . . . . 547
9.3.2. The choice of the restriction. . . . . . . . . . . . . . . . 549
9.3.3. The choice of prolongation . . . . . . . . . . . . . . . . . 549
9.3.4. The choice of the coarse grid matrix . . . . . . . . . . . . 550
9.4.  The choices of components . . . . . . . . . . . . . . . . . . 557
9.4.1. The smoothing. . . . . . . . . . . . . . . . . . . . . . . . 557
9.4.2. The coarsening . . . . . . . . . . . . . . . . . . . . . . . 560
9.4.3. Grid transfers . . . . . . . . . . . . . . . . . . . . . . . 561
9.4.4. The coarse grid operator . . . . . . . . . . . . . . . . . . 563
9.5.  The multigrid method. . . . . . . . . . . . . . . . . . . . . 563
9.6.  Convergence theory. . . . . . . . . . . . . . . . . . . . . . 567
9.7.  Complexity of multigrid . . . . . . . . . . . . . . . . . . . 571
9.8.  The full multigrid method . . . . . . . . . . . . . . . . . . 573
9.9.  Vector and parallel multigrid . . . . . . . . . . . . . . . . 576
9.10. Algebraic multigrid . . . . . . . . . . . . . . . . . . . . . 579
9.11. Bibliographical comments. . . . . . . . . . . . . . . . . . . 583

10/ Domain decomposition and multilevel methods

10.1. Introduction to domain decomposition. . . . . . . . . . . . . 585
10.2. Schwarz methods . . . . . . . . . . . . . . . . . . . . . . . 587
10.2.1. The classical Schwarz alternating method. . . . . . . . . . 587
10.2.2. The matrix form of the Schwarz alternating method . . . . . 589
10.2.3. The rate of convergence . . . . . . . . . . . . . . . . . . 592
10.2.4. Other boundary conditions . . . . . . . . . . . . . . . . . 595
10.2.5. Parallelizing multiplicative Schwarz. . . . . . . . . . . . 595
10.2.6. The additive Schwarz method . . . . . . . . . . . . . . . . 596
10.2.7. Adding a coarse mesh correction . . . . . . . . . . . . . . 597
10.3. An additive Schwarz preconditioner for parabolic problems . . 597
10.4. Algebraic domain decomposition methods without overlapping. . 600
10.4.1. Exact solvers for the subdomains. . . . . . . . . . . . . . 601
10.4.2. Approximate solvers for the subdomains. . . . . . . . . . . 604
10.5. Approximate Schur complements in the two subdomains case. . . 606
10.5.1. The Schur complement for block tridiagonal matrices . . . . 607
10.5.2. Eigenvalues of the Schur complement for separable problems. 608
10.5.3. Dryja's preconditioner. . . . . . . . . . . . . . . . . . . 613
10.5.4. Golub and Mayers' preconditioner. . . . . . . . . . . . . . 614
10.5.5. The Neumann-Dirichlet preconditioner. . . . . . . . . . . . 615
10.5.6. The Neumann-Neumann preconditioner. . . . . . . . . . . . . 617
10.5.7. Dependence on the aspect ratio. . . . . . . . . . . . . . . 618
10.5.8. Dependence on the coefficients. . . . . . . . . . . . . . . 620
10.5.9. Probing . . . . . . . . . . . . . . . . . . . . . . . . . . 621
10.5.10. INV and MINV approximations. . . . . . . . . . . . . . . . 623
10.5.11. The Schur complement for more general problems . . . . . . 625
10.6. Approximations of Schur complements with many subdomains. . . 627
10.7. Inexact subdomain solvers . . . . . . . . . . . . . . . . . . 632
10.8. Domain decomposition with boxes . . . . . . . . . . . . . . . 637
10.8.1. The Bramble, Pasciak and Schatz preconditioner. . . . . . . 638
10.8.2. Vertex space preconditioners. . . . . . . . . . . . . . . . 642
10.9. A block Red-Black DD preconditioner . . . . . . . . . . . . . 643
10.10. Multilevel preconditioners . . . . . . . . . . . . . . . . . 646
10.10.1. Additive multilevel Schwarz preconditioners. . . . . . . . 647
10.10.2. Multilevel ILU preconditioners . . . . . . . . . . . . . . 648
10.11. Bibliographical comments . . . . . . . . . . . . . . . . . . 655

References

There are 1368 references (90 pages).