In mathematics, more specifically in numerical linear algebra, the biconjugate gradient method is an algorithm to solve systems of linear equations
$$A x = b.$$
Unlike the conjugate gradient method, this algorithm does not require the matrix $A$ to be self-adjoint, but instead one needs to perform multiplications by the conjugate transpose $A^*$.
Choose initial guess $x_0$, two other vectors $x_0^*$ and $b^*$ and a preconditioner $M$:
$$r_0 \leftarrow b - A x_0, \qquad r_0^* \leftarrow b^* - x_0^* A,$$
$$p_0 \leftarrow M^{-1} r_0, \qquad p_0^* \leftarrow r_0^* M^{-1}.$$
for $k = 0, 1, 2, \ldots$ do
$$\alpha_k \leftarrow \frac{r_k^* M^{-1} r_k}{p_k^* A p_k},$$
$$x_{k+1} \leftarrow x_k + \alpha_k p_k, \qquad x_{k+1}^* \leftarrow x_k^* + \overline{\alpha_k}\, p_k^*,$$
$$r_{k+1} \leftarrow r_k - \alpha_k A p_k, \qquad r_{k+1}^* \leftarrow r_k^* - \overline{\alpha_k}\, p_k^* A,$$
$$\beta_k \leftarrow \frac{r_{k+1}^* M^{-1} r_{k+1}}{r_k^* M^{-1} r_k},$$
$$p_{k+1} \leftarrow M^{-1} r_{k+1} + \beta_k p_k, \qquad p_{k+1}^* \leftarrow r_{k+1}^* M^{-1} + \overline{\beta_k}\, p_k^*.$$
In the above formulation, the computed $r_k$ and $r_k^*$ satisfy
$$r_k = b - A x_k, \qquad r_k^* = b^* - x_k^* A$$
and thus are the respective residuals corresponding to $x_k$ and $x_k^*$, as approximate solutions to the systems
$$A x = b, \qquad x^* A = b^*;$$
$x^*$ is the adjoint, and $\overline{\alpha}$ is the complex conjugate.
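As a concrete illustration, here is a minimal NumPy sketch of the preconditioned iteration above. The function name `bicg`, the dense matrix `Minv` standing in for $M^{-1}$, and the stopping rule are illustrative choices, not part of the method or of any library API:

```python
import numpy as np

def bicg(A, b, b_star, x0, x0_star, Minv, max_iter=200, tol=1e-10):
    # Preconditioned BiCG as stated above; starred quantities are row
    # vectors (1-D arrays multiplied from the left), Minv plays M^{-1}.
    x, x_star = x0.astype(complex), x0_star.astype(complex)
    r = b - A @ x                         # r_0 = b - A x_0
    r_star = b_star - x_star @ A          # r_0^* = b^* - x_0^* A
    p, p_star = Minv @ r, r_star @ Minv   # p_0 and p_0^*
    rho = r_star @ (Minv @ r)             # r_k^* M^{-1} r_k
    for _ in range(max_iter):
        alpha = rho / (p_star @ (A @ p))
        x = x + alpha * p
        x_star = x_star + np.conj(alpha) * p_star
        r = r - alpha * (A @ p)
        r_star = r_star - np.conj(alpha) * (p_star @ A)
        if np.linalg.norm(r) < tol:
            break
        rho_next = r_star @ (Minv @ r)
        beta = rho_next / rho
        p = Minv @ r + beta * p
        p_star = r_star @ Minv + np.conj(beta) * p_star
        rho = rho_next
    return x, x_star
```

With $M$ the identity this reduces to the unpreconditioned version below; in exact arithmetic $x$ approaches the solution of $A x = b$ and $x^*$ that of $x^* A = b^*$.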
An unpreconditioned version of the algorithm is obtained by taking $M$ to be the identity. Choose initial guess $x_0$, set
$$r_0 \leftarrow b - A x_0, \qquad r_0^* \leftarrow b^* - x_0^* A, \qquad p_0 \leftarrow r_0, \qquad p_0^* \leftarrow r_0^*.$$
for $k = 0, 1, 2, \ldots$ do
$$\alpha_k \leftarrow \frac{r_k^* r_k}{p_k^* A p_k},$$
$$x_{k+1} \leftarrow x_k + \alpha_k p_k, \qquad x_{k+1}^* \leftarrow x_k^* + \overline{\alpha_k}\, p_k^*,$$
$$r_{k+1} \leftarrow r_k - \alpha_k A p_k, \qquad r_{k+1}^* \leftarrow r_k^* - \overline{\alpha_k}\, p_k^* A,$$
$$\beta_k \leftarrow \frac{r_{k+1}^* r_{k+1}}{r_k^* r_k},$$
$$p_{k+1} \leftarrow r_{k+1} + \beta_k p_k, \qquad p_{k+1}^* \leftarrow r_{k+1}^* + \overline{\beta_k}\, p_k^*.$$
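In practice one would normally call an existing implementation instead. SciPy, for instance, provides this method as `scipy.sparse.linalg.bicg` (the test matrix below is an arbitrary made-up example):

```python
import numpy as np
from scipy.sparse.linalg import bicg

rng = np.random.default_rng(0)
n = 50
A = 4 * np.eye(n) + rng.standard_normal((n, n))  # nonsymmetric test matrix
b = rng.standard_normal(n)

x, info = bicg(A, b)  # info == 0 indicates successful convergence
print(info, np.linalg.norm(A @ x - b))
```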
The biconjugate gradient method is numerically unstable (compare to the biconjugate gradient stabilized method), but very important from a theoretical point of view. Define the iteration steps by
$$x_k := x_j + P_k A^{-1} \left( b - A x_j \right), \qquad x_k^* := x_j^* + \left( b^* - x_j^* A \right) P_k A^{-1},$$
where $j < k$, using the related projection
$$P_k := \mathbf{u}_k \left( \mathbf{v}_k^* A \mathbf{u}_k \right)^{-1} \mathbf{v}_k^* A,$$
with
$$\mathbf{u}_k = \left[ u_0, u_1, \dots, u_{k-1} \right], \qquad \mathbf{v}_k = \left[ v_0, v_1, \dots, v_{k-1} \right].$$
These related projections may be iterated themselves as
$$P_{k+1} = P_k + \left( 1 - P_k \right) u_k \otimes \frac{v_k^* A \left( 1 - P_k \right)}{v_k^* A \left( 1 - P_k \right) u_k}.$$
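The explicit formula for $P_k$ and this rank-one update can be checked against each other numerically. A small sketch (the random $A$, $u_j$, $v_j$ below are arbitrary stand-ins, chosen only to make the projections well defined):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 3
A = rng.standard_normal((n, n))
U = rng.standard_normal((n, k + 1))  # columns u_0, ..., u_k
V = rng.standard_normal((n, k + 1))  # columns v_0, ..., v_k

def P(j):
    # P_j = u_j (v_j^* A u_j)^{-1} v_j^* A, built from the first j columns
    Uj, Vj = U[:, :j], V[:, :j]
    return Uj @ np.linalg.solve(Vj.T @ A @ Uj, Vj.T @ A)

I = np.eye(n)
Pk = P(k)
u, v = U[:, k], V[:, k]
w = (I - Pk) @ u            # column factor (1 - P_k) u_k
s = v @ A @ (I - Pk)        # row factor v_k^* A (1 - P_k)
Pk1 = Pk + np.outer(w, s) / (s @ u)

print(np.allclose(Pk1, P(k + 1)))   # the update reproduces P_{k+1}
print(np.allclose(Pk1 @ Pk1, Pk1))  # P_{k+1} is indeed a projection
```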
A relation to quasi-Newton methods is given by $P_k = A_k^{-1} A$ and $x_{k+1} = x_k - A_{k+1}^{-1} \left( A x_k - b \right)$, where
$$A_{k+1}^{-1} = A_k^{-1} + \left( 1 - A_k^{-1} A \right) u_k \otimes \frac{v_k^* \left( 1 - A A_k^{-1} \right)}{v_k^* A \left( 1 - A_k^{-1} A \right) u_k}.$$
The new directions
$$p_k = \left( 1 - P_k \right) u_k, \qquad p_k^* = v_k^* A \left( 1 - P_k \right) A^{-1}$$
are then orthogonal to the residuals:
$$v_i^* r_k = p_i^* r_k = 0, \qquad r_k^* u_j = r_k^* p_j = 0,$$
which themselves satisfy
$$r_k = A \left( 1 - P_k \right) A^{-1} r_j, \qquad r_k^* = r_j^* \left( 1 - P_k \right),$$
where $i, j < k$.
The biconjugate gradient method now makes a special choice and uses the setting
$$u_k = M^{-1} r_k, \qquad v_k^* = r_k^* M^{-1}.$$
With this particular choice, explicit evaluations of $P_k$ and $A^{-1}$ are avoided, and the algorithm takes the form stated above.
If $A = A^*$ is self-adjoint, $x_0^* = x_0$ and $b^* = b$, then $r_k = r_k^*$, $p_k = p_k^*$, and the conjugate gradient method produces the same sequence $x_k$ at half the computational cost.
The sequences produced by the algorithm are biorthogonal, i.e., $p_i^* A p_j = r_i^* M^{-1} r_j = 0$ for $i \neq j$ (a numerical illustration follows the properties below).
If $P_{j'}$ is a polynomial with $\deg\left( P_{j'} \right) + j < k$, then $r_k^* P_{j'} \left( M^{-1} A \right) u_j = 0$; the algorithm thus produces projections onto the Krylov subspace.
If $P_{i'}$ is a polynomial with $i + \deg\left( P_{i'} \right) < k$, then $v_i^* P_{i'} \left( A M^{-1} \right) r_k = 0$.
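As an illustration of the biorthogonality property, the following sketch runs a few unpreconditioned steps ($M = 1$, with the shadow residual $r_0^*$ simply set to $r_0$; all names are illustrative) and tabulates $p_i^* A p_j$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
A = 3 * np.eye(n) + rng.standard_normal((n, n))
b = rng.standard_normal(n)

x = np.zeros(n)
r = b - A @ x
r_s = r.copy()                 # shadow residual r_0^*
p, p_s = r.copy(), r_s.copy()  # first pair of directions
dirs, dirs_s = [p], [p_s]
rho = r_s @ r
for _ in range(4):
    alpha = rho / (p_s @ (A @ p))
    x = x + alpha * p
    r = r - alpha * (A @ p)
    r_s = r_s - alpha * (p_s @ A)
    rho_next = r_s @ r
    beta = rho_next / rho
    p = r + beta * p
    p_s = r_s + beta * p_s
    rho = rho_next
    dirs.append(p)
    dirs_s.append(p_s)

# p_i^* A p_j: diagonal entries nonzero, off-diagonal ~ 0 (round-off)
G = np.array([[pi @ A @ pj for pj in dirs] for pi in dirs_s])
print(np.round(G, 8))
```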