Matrix Computations and Semiseparable Matrices

Linear Systems

Raf Vandebril, Marc Van Barel, and Nicola Mastronardi

Volume
Volume 1

In recent years several new classes of matrices have been discovered and their structure exploited to design fast and accurate algorithms. In this new reference work, Raf Vandebril, Marc Van Barel, and Nicola Mastronardi present the first comprehensive overview of the mathematical and numerical properties of the family's newest member: semiseparable matrices.

The text is divided into three parts. The first provides some historical background and introduces concepts and definitions concerning structured rank matrices. The second offers some traditional methods for solving systems of equations involving the basic subclasses of these matrices. The third section discusses structured rank matrices in a broader context, presents algorithms for solving higher-order structured rank matrices, and examines hybrid variants such as block quasiseparable matrices. An accessible case study clearly demonstrates the general topic of each new concept discussed. Many of the routines featured are implemented in Matlab and can be downloaded from the Web for further exploration.
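To give a flavor of the structure the book studies, the following minimal Matlab sketch builds a symmetric generator representable semiseparable matrix from two generator vectors. It is illustrative only and is not one of the authors' downloadable routines; the names n, u, v, and S are chosen here for the example.

% Illustrative sketch only (not from the book's software):
% a symmetric generator representable semiseparable matrix with
% entries S(i,j) = u(max(i,j)) * v(min(i,j)).
n = 6;
u = randn(n,1);                      % generator vector
v = randn(n,1);                      % generator vector
S = tril(u*v.') + triu(v*u.',1);     % lower part from u and v; strict upper part mirrored
% Every block taken out of the lower (or upper) triangular part,
% diagonal included, has rank at most one:
rank(S(3:6,1:3))                     % returns 1

This low-rank structure of the off-diagonal blocks is precisely what the fast algorithms described in the book exploit.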

Reviews

In particular, the relation with algorithms of historical interest will make the book interesting for its readers.

An indispensable tool for scholars and research workers in mathematics and the mathematical sciences.

Book Details

Status
Available
Trim Size
7 x 9.25
Pages
584
ISBN
9780801896798
Illustration Description
7 halftones, 75 line drawings
Table of Contents

Preface
Notation
1. Introduction to semiseparable matrices
1.1. Definition of semiseparable matrices
1.2. Some properties
1.2.1. Relations under inversion
1.2.2. Generator representable semiseparable matrices
1.3. The representations
1.3.1. The generator representation
1.3.2. The Givens-vector representation
1.4. Conclusions
I. The reduction of matrices
2. Algorithms for reducing matrices
2.1. Introduction
2.1.1. Equivalence transformations
2.1.2. Orthogonal transformations
2.2. Orthogonal similarity transformations of symmetric matrices
2.2.1. To tridiagonal form
2.2.2. To semiseparable form
2.2.3. To semiseparable plus diagonal form
2.3. Orthogonal similarity transformation of (unsymmetric) matrices
2.3.1. To Hessenberg form
2.3.2. To Hessenberg-like form
2.3.3. To Hessenberg-like plus diagonal form
2.4. Orthogonal transformations of matrices
2.4.1. To upper (lower) bidiagonal form
2.4.2. To upper (lower) triangular semiseparable form
2.4.3. Relation with the reduction to semiseparable form
2.5. Transformations from sparse to structured rank form
2.5.1. From tridiagonal to semiseparable (plus diagonal)
2.5.2. From bidiagonal to upper triangular semiseparable
2.6. From structured rank to sparse form
2.6.1. From semiseparable (plus diagonal) to tridiagonal
2.6.2. From semiseparable to bidiagonal
2.7. Conclusions
3. Convergence properties of the reduction algorithms
3.1. The Arnoldi(Lanczos)-Ritz values
3.1.1. Ritz values and Arnoldi(Lanczos)-Ritz values
3.1.2. The reduction to semiseparable form
3.1.3. Necessary conditions to obtain the Ritz values
3.1.4. Sufficient conditions to obtain the Ritz values
3.1.5. The case of invariant subspaces
3.1.6. Some general remarks
3.1.7. The different reduction algorithms revisited
3.2. Subspace iteration inside the reduction algorithms
3.2.1. The reduction to semiseparable form
3.2.2. The reduction to semiseparable plus diagonal form
3.2.3. Nested multishift iteration
3.2.4. The different reduction algorithms revisited
3.3. Interaction of the convergence behaviors
3.3.1. The reduction to semiseparable form
3.3.2. The reduction to semiseparable plus diagonal form
3.3.3. Convergence speed of the nested multishift iteration
3.3.4. The other reduction algorithms
3.4. Conclusions
4. Implementation of the algorithms and numerical experiments
4.1. Working with Givens transformations
4.1.1. Graphical schemes
4.1.2. Interaction of Givens transformations
4.1.3. Updating the representation
4.1.4. The reduction algorithm revisited
4.2. Implementation details
4.2.1. Reduction to symmetric semiseparable form
4.2.2. Reduction to semiseparable plus diagonal form
4.2.3. Reduction to Hessenberg-like
4.2.4. Reduction to upper triangular semiseparable form
4.2.5. Computational complexities of the algorithms
4.3. Numerical experiments
4.3.1. The reduction to semiseparable form
4.3.2. The reduction to semiseparable plus diagonal form
4.4. Conclusions
II. QR-algorithms (eigenvalue problems)
5. Introduction: traditional sparse QR-algorithms
5.1. On the QR-algorithm
5.1.1. The QR-factorization
5.1.2. The QR-algorithm
5.2. A QR-algorithm for sparse matrices
5.2.1. The QR-factorization
5.2.2. Maintaining the Hessenberg structure
5.3. An implicit QR-method for sparse matrices
5.3.1. An implicit QR-algorithm
5.3.2. Bulge chasing
5.3.3. The implicit Q-theorem
5.4. On computing the singular values
5.5. Conclusion
6. Theoretical results for structured rank QR-algorithms
6.1. Preserving the structure under a QR-step
6.1.1. The QR-factorization
6.1.2. A QR-step maintains the rank structure
6.2. An implicit Q-theorem
6.2.1. Unreduced Hessenberg-like matrices
6.2.2. The reduction to unreduced Hessenberg-like form
6.2.3. An implicit Q-theorem for Hessenberg-like matrices
6.3. On Hessenberg-like plus diagonal matrices
6.4. Conclusions
7. Implicit QR-methods for semiseparable matrices
7.1. An implicit QR-algorithm for symmetric semiseparable matrices
7.1.1. Unreduced symmetric semiseparable matrices
7.1.2. The shift
7.1.3. An implicit QR-step
7.1.4. Proof of the correctness of the implicit approach
7.2. A QR-algorithm for semiseparable plus diagonal
7.3. An implicit QR-algorithm for Hessenberg-like matrices
7.3.1. The shift
7.3.2. An implicit QR-step on the Hessenberg-like matrix
7.3.3. Chasing the disturbance
7.3.4. Proof of correctness
7.4. An implicit QR-algorithm for computing the singular values
7.4.1. Unreduced upper triangular semiseparable matrices
7.4.2. An implicit QR-step
7.4.3. Chasing the bulge
7.4.4. Proof of correctness
7.5. Conclusions
8. Implementation and numerical experiments
8.1. Working with Givens transformations
8.1.1. Interaction of Givens transformations
8.1.2. Graphical interpretation of a QR-step
8.1.3. A QR-step for semiseparable plus diagonal matrices
8.2. Implementation of the QR-algorithm for semiseparable matrices
8.2.1. The QR-algorithm without shift
8.2.2. The reduction to unreduced form
8.2.3. The QR-algorithm with shift
8.2.4. Deflation after a step of the QR-algorithm
8.3. Computing the eigenvectors
8.3.1. Computing all the eigenvectors
8.3.2. Selected eigenvectors
8.3.3. Preventing the loss of orthogonality
8.3.4. The eigenvectors of an arbitrary symmetric matrix
8.4. Numerical experiments
8.4.1. On the symmetric eigenvalue solver
8.4.2. Experiments for the singular value decomposition
8.5. Conclusions
9. More on QR-related algorithms
9.1. Complex arithmetic and Givens transformations
9.2. Variations of the QR-algorithm
9.2.1. The QR-factorization and its variants
9.2.2. Flexibility in the QR-algorithm
9.2.3. The QR-algorithm and its variants
9.3. The QR-method for quasiseparable matrices
9.3.1. Definition and properties
9.3.2. The QR-factorization and the QR-algorithm
9.3.3. The implicit method
9.4. The multishift QR-algorithm
9.4.1. The multishift setting
9.4.2. An efficient transformation from v to βe1
9.4.3. The chasing method
9.4.4. The real Hessenberg-like case
9.5. A QH-algorithm
9.5.1. More on the QH-factorization
9.5.2. Convergence and preservation of the structure
9.5.3. An implicit QH-iteration
9.5.4. The QR-iteration is a disguised QH-iteration
9.5.5. Numerical experiments
9.6. Computing zeros of polynomials
9.6.1. Connection to eigenproblems
9.6.2. Unitary Hessenberg matrices
9.6.3. Unitary plus rank 1 matrices
9.6.4. Other methods
9.7. References to related subjects
9.7.1. Rational Krylov methods
9.7.2. Sturm sequences
9.7.3. Other references
9.8. Conclusions
III. Some generalizations and miscellaneous topics
10. Divide-and-conquer algorithms for the eigendecomposition
10.1. Arrowhead and diagonal plus rank 1 matrices
10.1.1. Symmetric arrowhead matrices
10.1.2. Computing the eigenvectors
10.1.3. Rank 1 modification of a diagonal matrix
10.2. Divide-and-conquer algorithms for tridiagonal matrices
10.2.1. Transformation into a similar arrowhead matrix
10.2.2. Transformation into a diagonal plus rank 1
10.3. Divide-and-conquer methods for quasiseparable matrices
10.3.1. A first divide-and-conquer algorithm
10.3.2. A straightforward divide-and-conquer algorithm
10.3.3. A one-way divide-and-conquer algorithm
10.3.4. A two-way divide-and-conquer algorithm
10.4. Computational complexity and numerical experiments
10.5. Conclusions
11. A Lanczos-type algorithm and rank revealing
11.1. Lanczos semiseparabilization
11.1.1. Lanczos reduction to tridiagonal form
11.1.2. Lanczos reduction to semiseparable form
11.2. Rank-revealing properties of the orthogonal similarity reduction
11.2.1. Symmetric rank-revealing factorization
11.2.2. Rank-revealing via the semiseparable reduction
11.2.3. Numerical experiments
11.3. Conclusions
IV. Orthogonal (rational) functions (Inverse eigenvalue problems)
12. Orthogonal polynomials and discrete least squares
12.1. Recurrence relation and Hessenberg matrix
12.2. Discrete inner product
12.3. Inverse eigenvalue problem
12.4. Polynomial least squares approximation
12.5. Updating algorithm
12.6. Special cases
12.7. Conclusions
13. Orthonormal polynomial vectors
13.1. Vector approximants
13.2. Equal degrees
13.2.1. The optimization problem
13.2.2. The algorithm
13.2.3. Summary
13.3. Arbitrary degrees
13.3.1. The problem
13.3.2. The algorithm
13.3.3. Orthogonal vector polynomials
13.3.4. Solution of the general approximation problem
13.4. The singular case
13.5. Conclusions
14. Orthogonal rational functions
14.1. The computation of orthonormal rational functions
14.1.1. The functional problem
14.1.2. The inverse eigenvalue problem
14.1.3. Recurrence relation for the columns of Q
14.1.4. Recurrence relation for the orthonormal functions
14.2. Solving the inverse eigenvalue problem
14.3. Special configurations of points zi
14.3.1. Special case: all points zi are real
14.3.2. Special case: all points zi lie on the unit circle
14.3.3. Special case: all points zi lie on a generic circle
14.4. Conclusions
15. Concluding remarks & software
15.1. Software
15.2. Conclusions
Bibliography
Author/Editor Index
Subject Index

Author Bios
Featured Contributor

Raf Vandebril, Ph.D.

Raf Vandebril is a researcher in the Department of Computer Science at the Catholic University of Leuven, Belgium.