In numerical analysis, the Shanks transformation is a non-linear series acceleration method to increase the rate of convergence of a sequence. This method is named after Daniel Shanks, who rediscovered this sequence transformation in 1955. It was first derived and published by R. Schmidt in 1941.
For a sequence $\left(a_m\right)_{m\in\mathbb{N}}$ the series

$$A = \sum_{m=0}^\infty a_m$$

is to be determined. First, the partial sum $A_n$ is defined as:

$$A_n = \sum_{m=0}^n a_m$$

and forms a new sequence $\left(A_n\right)_{n\in\mathbb{N}}$. Provided the series converges, $A_n$ will also approach the limit $A$ as $n\to\infty$.
The Shanks transformation $S(A_n)$ of the sequence $A_n$ is the new sequence defined by

$$S(A_n) = \frac{A_{n+1}\,A_{n-1} - A_n^2}{A_{n+1} - 2A_n + A_{n-1}} = A_{n+1} - \frac{(A_{n+1} - A_n)^2}{(A_{n+1} - A_n) - (A_n - A_{n-1})}$$

where this sequence $S(A_n)$ often converges more rapidly than the sequence $A_n$. Further speed-up may be obtained by repeated use of the Shanks transformation, by computing $S^2(A_n) = S(S(A_n))$, $S^3(A_n) = S(S(S(A_n)))$, etc.
Note that the non-linear transformation as used in the Shanks transformation is essentially the same as used in Aitken's delta-squared process, so that, as with Aitken's method, the right-most expression in $S(A_n)$'s definition (i.e. $S(A_n) = A_{n+1} - \frac{(A_{n+1} - A_n)^2}{(A_{n+1} - A_n) - (A_n - A_{n-1})}$) is more numerically stable than the expression to its left (i.e. $S(A_n) = \frac{A_{n+1}\,A_{n-1} - A_n^2}{A_{n+1} - 2A_n + A_{n-1}}$). Both Aitken's method and the Shanks transformation operate on a sequence, but the sequence the Shanks transformation operates on is usually thought of as being a sequence of partial sums, although any sequence may be viewed as a sequence of partial sums.
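To make the definition concrete, here is a minimal Python sketch (my own illustration, not code from the article; the function name shanks is an assumption) that applies one Shanks step to a list of partial sums, using the Aitken-style right-most form because of the better numerical stability noted above.

    def shanks(seq):
        """One Shanks step: returns S(A_n) for n = 1 .. len(seq) - 2."""
        out = []
        for n in range(1, len(seq) - 1):
            d1 = seq[n + 1] - seq[n]      # A_{n+1} - A_n
            d0 = seq[n] - seq[n - 1]      # A_n - A_{n-1}
            denom = d1 - d0               # A_{n+1} - 2 A_n + A_{n-1}
            # Guard against a vanishing denominator (e.g. an exactly arithmetic sequence).
            out.append(seq[n + 1] - d1 * d1 / denom if denom != 0 else seq[n + 1])
        return out

Repeated transformations are then simply shanks(shanks(seq)) for $S^2$, and so on; each application shortens the sequence by two entries.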
As an example, consider the slowly convergent series

$$4\sum_{k=0}^\infty \frac{(-1)^k}{2k+1} = 4\left(1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots\right)$$
which has the exact sum π ≈ 3.14159265. The partial sum $A_6$ has only one digit accuracy, while six-figure accuracy requires summing about 400,000 terms.
In the table below, the partial sums $A_n$, the Shanks transformation $S(A_n)$ on them, as well as the repeated Shanks transformations $S^2(A_n)$ and $S^3(A_n)$ are given for $n$ up to 12. The figure to the right shows the absolute error for the partial sums and Shanks transformation results, clearly showing the improved accuracy and convergence rate.
The Shanks transformation $S(A_1)$ already has two-digit accuracy, while the original partial sums only establish the same accuracy at $A_{24}$. Remarkably, $S^3(A_3)$ has six-digit accuracy, obtained from repeated Shanks transformations applied to the first seven terms $A_0, \dots, A_6$. As said before, $A_n$ only attains six-digit accuracy after summing about 400,000 terms.
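As a rough numerical check of the behaviour described above (a sketch under my own naming conventions; the article itself presents these results in a table), the partial sums $A_0, \dots, A_{12}$ can be generated and transformed repeatedly:

    import math
    def shanks(seq):
        out = []
        for n in range(1, len(seq) - 1):
            d1, d0 = seq[n + 1] - seq[n], seq[n] - seq[n - 1]
            denom = d1 - d0
            out.append(seq[n + 1] - d1 * d1 / denom if denom != 0 else seq[n + 1])
        return out
    # Partial sums A_0 .. A_12 of 4 * (1 - 1/3 + 1/5 - 1/7 + ...)
    A = [4 * sum((-1) ** k / (2 * k + 1) for k in range(n + 1)) for n in range(13)]
    S1 = shanks(A)    # S(A_n),   n = 1 .. 11
    S2 = shanks(S1)   # S^2(A_n), n = 2 .. 10
    S3 = shanks(S2)   # S^3(A_n), n = 3 .. 9
    print(S1[0])                         # S(A_1)   ~ 3.1667, two correct digits
    print(S3[0], abs(S3[0] - math.pi))   # S^3(A_3) ~ 3.141599, error ~ 7e-6 (about six digits)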
In numerical analysis, Richardson extrapolation is a sequence acceleration method used to improve the rate of convergence of a sequence of estimates of some value $A^* = \lim_{h\to 0} A(h)$. In essence, given the value of $A(h)$ for several values of $h$, we can estimate $A^*$ by extrapolating the estimates to $h = 0$. It is named after Lewis Fry Richardson, who introduced the technique in the early 20th century, though the idea was already known to Christiaan Huygens in his calculation of π. In the words of Birkhoff and Rota, "its usefulness for practical computations can hardly be overestimated."
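As a small illustration of the idea (an example of my own, not taken from the passage above): if $A(h)$ is a central-difference estimate of a derivative with leading error proportional to $h^2$, combining the estimates at step sizes $h$ and $h/2$ as $(4A(h/2) - A(h))/3$ cancels that leading error term.

    import math
    def central_diff(f, x, h):
        # A(h): central-difference estimate of f'(x), leading error ~ C * h^2
        return (f(x + h) - f(x - h)) / (2 * h)
    def richardson(f, x, h):
        # Extrapolate A(h) and A(h/2) to h = 0 for an O(h^2) method: (4*A(h/2) - A(h)) / 3
        return (4 * central_diff(f, x, h / 2) - central_diff(f, x, h)) / 3
    x, h = 1.0, 0.1
    exact = math.cos(x)  # derivative of sin at x
    print(abs(central_diff(math.sin, x, h) - exact))  # error ~ 1e-3
    print(abs(richardson(math.sin, x, h) - exact))    # error ~ 1e-7, markedly better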
In numerical analysis, the order of convergence and the rate of convergence of a convergent sequence are quantities that represent how quickly the sequence approaches its limit. A sequence $(x_n)$ that converges to $L$ is said to have order of convergence $q \geq 1$ and rate of convergence $\mu$ if

$$\lim_{n\to\infty} \frac{|x_{n+1} - L|}{|x_n - L|^q} = \mu.$$

The rate of convergence $\mu$ is also called the asymptotic error constant. Note that this terminology is not standardized and some authors will use rate where this article uses order.
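For a concrete illustration (an example added here, not part of the original passage): the sequence $x_n = c\,r^n$ with $0 < r < 1$ converges to $0$ with order $q = 1$ and rate $\mu = r$, since $|x_{n+1} - 0| / |x_n - 0| = r$ for every $n$ (linear convergence); by contrast, $x_n = 2^{-2^n}$ satisfies $x_{n+1} = x_n^2$, so $|x_{n+1}| / |x_n|^2 = 1$ and the convergence is quadratic ($q = 2$, $\mu = 1$).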