Divide-and-conquer eigenvalue algorithms are a class of eigenvalue algorithms for Hermitian or real symmetric matrices that have recently (circa 1990s) become competitive in terms of stability and efficiency with more traditional algorithms such as the QR algorithm. The basic concept behind these algorithms is the divide-and-conquer approach from computer science: an eigenvalue problem is divided into two problems of roughly half the size, each of which is solved recursively, and the eigenvalues of the original problem are computed from the results of these smaller problems.
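As a point of reference, the divide-and-conquer method underlies LAPACK's syevd family of symmetric eigensolvers, which SciPy exposes through the driver='evd' option of scipy.linalg.eigh. A minimal sketch (the test matrix here is arbitrary):

```python
import numpy as np
from scipy.linalg import eigh

# Build a random real symmetric test matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
A = (A + A.T) / 2

# driver='evd' selects LAPACK's divide-and-conquer routine (syevd).
w, V = eigh(A, driver="evd")

# Check the eigendecomposition: A V = V diag(w).
print(np.allclose(A @ V, V * w))
```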
Here we present the simplest version of a divide-and-conquer algorithm, similar to the one originally proposed by Cuppen in 1981. Many details that lie outside the scope of this article are omitted; however, without these details, the algorithm is not fully stable.
As with most eigenvalue algorithms for Hermitian matrices, divide-and-conquer begins with a reduction to tridiagonal form. For an $m \times m$ matrix, the standard method for this, via Householder reflections, takes $\tfrac{4}{3}m^{3}$ floating point operations, or $\tfrac{8}{3}m^{3}$ if eigenvectors are needed as well. There are other algorithms, such as the Arnoldi iteration, which may do better for certain classes of matrices; we will not consider this further here.
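For concreteness, the following is a minimal, unblocked sketch of Householder tridiagonalization in NumPy; the function name and structure are illustrative, and library implementations (e.g., LAPACK's sytrd) use blocked updates and compact storage of the reflectors instead of forming each reflector as a full matrix:

```python
import numpy as np

def householder_tridiagonalize(A):
    """Reduce a real symmetric matrix A to tridiagonal form T = Q^T A Q
    via Householder reflections (unblocked illustration, O(m^3) flops)."""
    A = np.array(A, dtype=float)
    m = A.shape[0]
    Q = np.eye(m)
    for k in range(m - 2):
        x = A[k + 1:, k]
        # Householder vector that zeroes x below its first entry.
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        norm_v = np.linalg.norm(v)
        if norm_v == 0.0:
            continue  # column already reduced
        v /= norm_v
        H = np.eye(m)
        H[k + 1:, k + 1:] -= 2.0 * np.outer(v, v)
        A = H @ A @ H  # two-sided similarity transform preserves eigenvalues
        Q = Q @ H
    return A, Q

# Quick check: Q is orthogonal and Q^T A Q equals the returned T.
rng = np.random.default_rng(1)
A0 = rng.standard_normal((6, 6))
A0 = (A0 + A0.T) / 2
T, Q = householder_tridiagonalize(A0)
print(np.allclose(T, Q.T @ A0 @ Q), np.allclose(Q @ Q.T, np.eye(6)))
```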
In certain cases, it is possible to deflate an eigenvalue problem into smaller problems. Consider a block diagonal matrix

$$T = \begin{bmatrix} T_1 & 0 \\ 0 & T_2 \end{bmatrix}.$$
The eigenvalues and eigenvectors of $T$ are simply those of $T_1$ and $T_2$, and it will almost always be faster to solve these two smaller problems than to solve the original problem all at once. This technique can be used to improve the efficiency of many eigenvalue algorithms, but it has special significance to divide-and-conquer.
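A small numerical check of this fact, assuming NumPy and SciPy (the blocks here are arbitrary symmetric matrices):

```python
import numpy as np
from scipy.linalg import block_diag, eigvalsh

rng = np.random.default_rng(2)
T1 = rng.standard_normal((3, 3)); T1 = (T1 + T1.T) / 2
T2 = rng.standard_normal((4, 4)); T2 = (T2 + T2.T) / 2

# The spectrum of diag(T1, T2) is the union of the blocks' spectra.
T = block_diag(T1, T2)
whole = np.sort(eigvalsh(T))
parts = np.sort(np.concatenate([eigvalsh(T1), eigvalsh(T2)]))
print(np.allclose(whole, parts))
```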
For the rest of this article, we will assume the input to the divide-and-conquer algorithm is an $n \times n$ real symmetric tridiagonal matrix $T$. Although the algorithm can be modified for Hermitian matrices, we do not give the details here.
The divide part of the divide-and-conquer algorithm comes from the realization that a tridiagonal matrix is "almost" block diagonal. Choose a split point $m$ (typically $m \approx n/2$) and let $\beta = T_{m,m+1}$ denote the off-diagonal entry that couples the two halves. Then

$$T = \begin{bmatrix} \hat{T}_1 & 0 \\ 0 & \hat{T}_2 \end{bmatrix} + \beta v v^{\mathsf{T}},$$

where $\hat{T}_1$ is the leading $m \times m$ block of $T$ with its last diagonal entry reduced by $\beta$, $\hat{T}_2$ is the trailing block with its first diagonal entry reduced by $\beta$, and $v$ is the vector with ones in positions $m$ and $m+1$ and zeros elsewhere.
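The following sketch carries out this splitting and verifies it numerically; split_tridiagonal is an illustrative helper under the conventions above, not a library routine:

```python
import numpy as np

def split_tridiagonal(T):
    """Split symmetric tridiagonal T into block_diag(T1_hat, T2_hat)
    plus the rank-one correction beta * v v^T."""
    n = T.shape[0]
    m = n // 2
    beta = T[m - 1, m]        # off-diagonal entry coupling the two halves
    T1_hat = T[:m, :m].copy()
    T2_hat = T[m:, m:].copy()
    T1_hat[-1, -1] -= beta    # compensate the boundary diagonal entries
    T2_hat[0, 0] -= beta
    v = np.zeros(n)
    v[m - 1] = v[m] = 1.0     # ones in positions m and m+1 (1-based)
    return T1_hat, T2_hat, beta, v

# Verify on a random symmetric tridiagonal matrix.
rng = np.random.default_rng(3)
n = 8
d = rng.standard_normal(n)
e = rng.standard_normal(n - 1)
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

T1_hat, T2_hat, beta, v = split_tridiagonal(T)
B = np.zeros_like(T)
B[:n // 2, :n // 2] = T1_hat
B[n // 2:, n // 2:] = T2_hat
print(np.allclose(T, B + beta * np.outer(v, v)))
```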