In mathematics, a sequence transformation is an operator acting on a given space of sequences (a sequence space). Sequence transformations include linear mappings such as convolution with another sequence and resummation of a sequence; more generally, they are commonly used for series acceleration, that is, for improving the rate of convergence of a slowly convergent sequence or series. Sequence transformations are also commonly used to compute the antilimit of a divergent series numerically, and are used in conjunction with extrapolation methods.
Classical examples of sequence transformations include the binomial transform, the Möbius transform, the Stirling transform, and others.
For a given sequence $(s_n)_{n\in\mathbb{N}}$, the transformed sequence is $(s'_n)_{n\in\mathbb{N}} = \mathbf{T}((s_n)_{n\in\mathbb{N}})$, where the members of the transformed sequence are usually computed from some finite number of members of the original sequence, i.e.

$$s'_n = T(s_n, s_{n+1}, \dots, s_{n+k_n})$$

for some $k_n$ which often depends on $n$ (cf. e.g. binomial transform). In the simplest case, the $s_n$ and the $s'_n$ are real or complex numbers. More generally, they may be elements of some vector space or algebra.
In the context of acceleration of convergence, the transformed sequence is said to converge faster than the original sequence if

$$\lim_{n\to\infty} \frac{s'_n - \ell}{s_n - \ell} = 0,$$

where $\ell$ is the limit of $(s_n)$, assumed to be convergent. In this case, convergence acceleration is obtained. If the original sequence is divergent, the sequence transformation acts as an extrapolation method to the antilimit $\ell$.
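To make the criterion concrete, the following minimal Python sketch compares two sequences with the same limit; the particular sequences are assumptions chosen only so that the error ratio visibly tends to zero.

```python
# Hedged illustration (the sequences are assumed, not from the text):
# both s_n and t_n converge to ell = 1, but t_n converges faster,
# so the ratio (t_n - ell) / (s_n - ell) tends to 0.
ell = 1.0
s = [ell + 2.0 ** (-n) for n in range(30)]   # error halves each step
t = [ell + 4.0 ** (-n) for n in range(30)]   # error quarters each step

for n in (5, 10, 20):
    ratio = (t[n] - ell) / (s[n] - ell)
    print(f"n={n:2d}  ratio={ratio:.3e}")    # equals 2^-n: tends to 0
```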
If the mapping $T$ is linear in each of its arguments, i.e., if

$$s'_n = \sum_{m=0}^{k_n} c_{n,m}\, s_{n+m}$$

for some constants $c_{n,0}, \dots, c_{n,k_n}$ (which may depend on $n$), the sequence transformation is called a linear sequence transformation. Sequence transformations that are not linear are called nonlinear sequence transformations.
The simplest examples of (linear) sequence transformations include shifting all elements, $s'_n = s_{n+k}$ (resp. $s'_n = 0$ if $n + k < 0$) for a fixed $k$, and scalar multiplication of the sequence. A less trivial example is discrete convolution with a fixed sequence. A particularly basic form is the difference operator, which is convolution with the sequence $(-1, 1, 0, 0, \dots)$ and is a discrete analog of the derivative.
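A hedged Python sketch of these three linear transformations (the helper names are mine, not a standard API): shifting, scaling, and the difference operator, whose repeated application mirrors repeated differentiation.

```python
# Minimal illustrations of linear sequence transformations on finite prefixes.
def shift(s, k):
    # s'_n = s_{n+k}, with s'_n = 0 when n + k falls outside the prefix
    return [s[n + k] if 0 <= n + k < len(s) else 0 for n in range(len(s))]

def scale(s, c):
    # s'_n = c * s_n
    return [c * x for x in s]

def diff(s):
    # forward difference (Delta s)_n = s_{n+1} - s_n, a discrete derivative
    return [s[n + 1] - s[n] for n in range(len(s) - 1)]

squares = [n * n for n in range(8)]   # 0, 1, 4, 9, 16, ...
print(diff(squares))                  # [1, 3, 5, 7, ...]: first "derivative"
print(diff(diff(squares)))            # [2, 2, 2, ...]: constant second difference
```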
In numerical analysis, Richardson extrapolation is a sequence acceleration method used to improve the rate of convergence of a sequence of estimates of some value $A^\ast = \lim_{h\to 0} A(h)$. In essence, given the value of $A(h)$ for several values of $h$, we can estimate $A^\ast$ by extrapolating the estimates to $h = 0$. It is named after Lewis Fry Richardson, who introduced the technique in the early 20th century, though the idea was already known to Christiaan Huygens in his calculation of π. In the words of Birkhoff and Rota, "its usefulness for practical computations can hardly be overestimated."
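As a hedged illustration of the idea, the sketch below applies one Richardson step to an assumed example: the forward-difference estimate $A(h)$ of a derivative, whose leading error term is $O(h)$, so extrapolating $A(h)$ and $A(h/2)$ cancels that term.

```python
import math

def A(h, f=math.sin, x=1.0):
    # Assumed example estimator: A(h) = f'(x) + O(h), so the error order t = 1.
    return (f(x + h) - f(x)) / h

def richardson(h, t=1):
    # One extrapolation step toward h = 0:
    # R(h) = (2^t * A(h/2) - A(h)) / (2^t - 1) = A* + O(h^(t+1))
    return (2 ** t * A(h / 2) - A(h)) / (2 ** t - 1)

exact = math.cos(1.0)
for h in (0.1, 0.01):
    print(f"h={h}: plain error {abs(A(h) - exact):.1e}, "
          f"extrapolated error {abs(richardson(h) - exact):.1e}")
```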
In mathematics, series acceleration is one of a collection of sequence transformations for improving the rate of convergence of a series. Techniques for series acceleration are often applied in numerical analysis, where they are used to improve the speed of numerical integration. Series acceleration techniques may also be used, for example, to obtain a variety of identities on special functions. Thus, the Euler transform applied to the hypergeometric series gives some of the classic, well-known hypergeometric series identities.
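A hedged numeric sketch of that acceleration effect, assuming the classical Euler transform of an alternating series, $\sum_{n\ge 0} (-1)^n a_n = \sum_{k\ge 0} (\Delta^k a)_0 / 2^{k+1}$ with $(\Delta a)_n = a_n - a_{n+1}$, applied here to the alternating harmonic series (limit $\ln 2$):

```python
import math

def euler_transform_partials(a, terms):
    # Partial sums of sum_{k>=0} (Delta^k a)_0 / 2^(k+1),
    # where (Delta a)_n = a_n - a_{n+1}; needs len(a) > terms.
    diffs = list(a)
    total, partials = 0.0, []
    for k in range(terms):
        total += diffs[0] / 2 ** (k + 1)
        partials.append(total)
        diffs = [diffs[i] - diffs[i + 1] for i in range(len(diffs) - 1)]
    return partials

a = [1.0 / (n + 1) for n in range(30)]            # a_n for sum (-1)^n / (n+1) = ln 2
plain = sum((-1) ** n * a[n] for n in range(10))  # 10 raw terms
fast = euler_transform_partials(a, 10)[-1]        # 10 transformed terms
print(abs(plain - math.log(2)))                   # roughly 5e-2
print(abs(fast - math.log(2)))                    # roughly 1e-4
```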
In numerical analysis, Aitken's delta-squared process or Aitken extrapolation is a series acceleration method, used for accelerating the rate of convergence of a sequence. It is named after Alexander Aitken, who introduced this method in 1926. Its early form was known to Seki Kōwa (end of 17th century) and was found for rectification of the circle, i.e. the calculation of π. It is most useful for accelerating the convergence of a sequence that is converging linearly.
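A minimal hedged sketch of the Δ² formula, $s'_n = s_n - (\Delta s_n)^2 / (\Delta^2 s_n)$; the fixed-point iteration $x \mapsto \cos x$ used below is an assumed example of a linearly convergent sequence.

```python
import math

def aitken(s):
    # s'_n = s_n - (s_{n+1} - s_n)^2 / (s_{n+2} - 2 s_{n+1} + s_n)
    out = []
    for n in range(len(s) - 2):
        d1 = s[n + 1] - s[n]
        d2 = s[n + 2] - 2 * s[n + 1] + s[n]
        out.append(s[n] - d1 * d1 / d2)   # assumes d2 != 0
    return out

x = [0.5]
for _ in range(10):
    x.append(math.cos(x[-1]))             # linearly convergent fixed-point iteration
root = 0.7390851332151607                 # fixed point of cos(x)
acc = aitken(x)
print(abs(x[-1] - root))                  # raw iterate error
print(abs(acc[-1] - root))                # accelerated error, orders of magnitude smaller
```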
The objective of this course is to study the various manifestations of totalitarian worlds in fiction. More precisely, we will examine how writers depict the alienation of the human being
We study the fundamental concepts of analysis, calculus and the integral of real-valued functions of a real variable.
Distributed learning is key to enabling the training of modern large-scale machine learning models by parallelising the learning process. Collaborative learning is essential for learning from privacy-sensitive data that is distributed across various ...
We propose a stochastic conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms. Existing CGM variants for this template either suffer from slow convergence rates, or require carefully inc ...
Second-order Møller-Plesset perturbation theory (MP2) is the most expedient wave function-based method for considering electron correlation in quantum chemical calculations and, as such, provides a cost-effective framework to assess the effects of basis se ...