Summary
In mathematics, more specifically in numerical linear algebra, the biconjugate gradient method is an algorithm to solve systems of linear equations $A x = b$. Unlike the conjugate gradient method, this algorithm does not require the matrix $A$ to be self-adjoint, but instead one needs to perform multiplications by the conjugate transpose $A^*$.

The algorithm

1. Choose an initial guess $x_0$, two other vectors $x_0^*$ and $b^*$, and a preconditioner $M$.
2. $r_0 \leftarrow b - A x_0$
3. $r_0^* \leftarrow b^* - x_0^* A$
4. $p_0 \leftarrow M^{-1} r_0$
5. $p_0^* \leftarrow r_0^* M^{-1}$
6. For $k = 0, 1, 2, \ldots$ do
   - $\alpha_k \leftarrow \frac{r_k^* M^{-1} r_k}{p_k^* A p_k}$
   - $x_{k+1} \leftarrow x_k + \alpha_k \, p_k$
   - $x_{k+1}^* \leftarrow x_k^* + \overline{\alpha_k} \, p_k^*$
   - $r_{k+1} \leftarrow r_k - \alpha_k \, A p_k$
   - $r_{k+1}^* \leftarrow r_k^* - \overline{\alpha_k} \, p_k^* A$
   - $\beta_k \leftarrow \frac{r_{k+1}^* M^{-1} r_{k+1}}{r_k^* M^{-1} r_k}$
   - $p_{k+1} \leftarrow M^{-1} r_{k+1} + \beta_k \, p_k$
   - $p_{k+1}^* \leftarrow r_{k+1}^* M^{-1} + \overline{\beta_k} \, p_k^*$

In the above formulation, the computed $r_k$ and $r_k^*$ satisfy $r_k = b - A x_k$ and $r_k^* = b^* - x_k^* A$, and thus are the respective residuals corresponding to $x_k$ and $x_k^*$, as approximate solutions to the systems $A x = b$ and $x^* A = b^*$; here $x^*$ denotes the adjoint, and $\overline{\alpha_k}$ the complex conjugate.

Unpreconditioned version of the algorithm

1. Choose an initial guess $x_0$.
2. $r_0 \leftarrow b - A x_0$
3. $\hat{r}_0 \leftarrow \hat{b} - \hat{x}_0 A$
4. $p_0 \leftarrow r_0$
5. $\hat{p}_0 \leftarrow \hat{r}_0$
6. For $k = 0, 1, 2, \ldots$ do
   - $\alpha_k \leftarrow \frac{\hat{r}_k r_k}{\hat{p}_k A p_k}$
   - $x_{k+1} \leftarrow x_k + \alpha_k \, p_k$
   - $\hat{x}_{k+1} \leftarrow \hat{x}_k + \alpha_k \, \hat{p}_k$
   - $r_{k+1} \leftarrow r_k - \alpha_k \, A p_k$
   - $\hat{r}_{k+1} \leftarrow \hat{r}_k - \alpha_k \, \hat{p}_k A$
   - $\beta_k \leftarrow \frac{\hat{r}_{k+1} r_{k+1}}{\hat{r}_k r_k}$
   - $p_{k+1} \leftarrow r_{k+1} + \beta_k \, p_k$
   - $\hat{p}_{k+1} \leftarrow \hat{r}_{k+1} + \beta_k \, \hat{p}_k$

An implementation sketch of this unpreconditioned version is given at the end of this summary.

Discussion

The biconjugate gradient method is numerically unstable (compare to the biconjugate gradient stabilized method), but very important from a theoretical point of view. Define the iteration steps by
$x_k := x_j + P_k A^{-1} (b - A x_j)$ and $x_k^* := x_j^* + (b^* - x_j^* A) P_k A^{-1}$,
where $j < k$, using the related projection
$P_k := \mathbf{u}_k (\mathbf{v}_k^* A \mathbf{u}_k)^{-1} \mathbf{v}_k^* A$,
with
$\mathbf{u}_k = [u_0, u_1, \ldots, u_{k-1}]$ and $\mathbf{v}_k = [v_0, v_1, \ldots, v_{k-1}]$.
These related projections may be iterated themselves as
$P_{k+1} = P_k + (1 - P_k) u_k \otimes \frac{v_k^* A (1 - P_k)}{v_k^* A (1 - P_k) u_k}$.
A relation to Quasi-Newton methods is given by $P_k = A_k^{-1} A$ and $x_{k+1} = x_k - A_{k+1}^{-1} (A x_k - b)$, where
$A_{k+1}^{-1} = A_k^{-1} + (1 - A_k^{-1} A) u_k \otimes \frac{v_k^* (1 - A A_k^{-1})}{v_k^* A (1 - A_k^{-1} A) u_k}$.
The new directions
$p_k = (1 - P_k) u_k$ and $p_k^* = v_k^* A (1 - P_k) A^{-1}$
are then orthogonal to the residuals:
$v_i^* r_k = p_i^* r_k = 0$ and $r_k^* u_j = r_k^* p_j = 0$,
which themselves satisfy
$r_k = A (1 - P_k) A^{-1} r_j$ and $r_k^* = r_j^* (1 - P_k)$,
where $i, j < k$.

The biconjugate gradient method now makes a special choice and uses the setting
$u_k = M^{-1} r_k$ and $v_k^* = r_k^* M^{-1}$.
With this particular choice, explicit evaluations of $P_k$ and $A^{-1}$ are avoided, and the algorithm takes the form stated above.

Properties

- If $A = A^*$ is self-adjoint, $x_0^* = \overline{x_0}$ and $b^* = \overline{b}$, then $r_k = \overline{r_k^*}$ and $p_k = \overline{p_k^*}$, and the conjugate gradient method produces the same sequence $x_k$ at half the computational cost.
- The sequences produced by the algorithm are biorthogonal, i.e., $p_i^* A p_j = r_i^* M^{-1} r_j = 0$ for $i \neq j$.
- If $P_{j'}$ is a polynomial with $\deg(P_{j'}) + j < k$, then $r_k^* P_{j'}(M^{-1} A) u_j = 0$; the algorithm thus produces projections onto the Krylov subspace.
- If $P_{i'}$ is a polynomial with $i + \deg(P_{i'}) < k$, then $v_i^* P_{i'}(A M^{-1}) r_k = 0$.
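To make the unpreconditioned iteration above concrete, here is a minimal Python/NumPy sketch for a real, non-singular matrix $A$. The function name bicg, the choice of shadow data $\hat{b} = b$ and $\hat{x}_0 = x_0$, and the tolerance and iteration-count parameters are illustrative assumptions rather than part of the method's definition; breakdowns (near-zero denominators) are not handled.

import numpy as np

def bicg(A, b, x0=None, tol=1e-10, max_iter=200):
    """Unpreconditioned biconjugate gradient sketch for a real matrix A.

    Solves A x = b together with the shadow system x_hat A = b_hat,
    here taking b_hat = b and x_hat_0 = x_0 (an illustrative choice).
    """
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    x_hat = x.copy()                     # iterate for the shadow (adjoint) system
    r = b - A @ x                        # residual r_k = b - A x_k
    r_hat = b - A.T @ x_hat              # shadow residual r_hat_k = b_hat - x_hat_k A
    p, p_hat = r.copy(), r_hat.copy()    # initial search directions
    rho = r_hat @ r                      # r_hat_k^T r_k
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rho / (p_hat @ Ap)       # alpha_k = (r_hat_k r_k) / (p_hat_k A p_k)
        x += alpha * p
        x_hat += alpha * p_hat
        r -= alpha * Ap
        r_hat -= alpha * (A.T @ p_hat)   # p_hat_k A, written column-wise via A^T
        if np.linalg.norm(r) < tol:
            break
        rho_new = r_hat @ r
        beta = rho_new / rho             # beta_k = (r_hat_{k+1} r_{k+1}) / (r_hat_k r_k)
        p = r + beta * p
        p_hat = r_hat + beta * p_hat
        rho = rho_new
    return x

if __name__ == "__main__":
    # Small non-symmetric example; the printed residual norm should be near machine precision.
    A = np.array([[4.0, 1.0], [2.0, 3.0]])
    b = np.array([1.0, 2.0])
    x = bicg(A, b)
    print(np.linalg.norm(A @ x - b))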