
# Polynomial interpolation

Summary

In numerical analysis, polynomial interpolation is the interpolation of a given bivariate data set by the polynomial of lowest possible degree that passes through the points of the dataset.
Given a set of n + 1 data points (x_0, y_0), ..., (x_n, y_n), with no two x_j the same, a polynomial function p is said to interpolate the data if p(x_j) = y_j for each j.
There is always a unique such polynomial of degree at most n, commonly given by two explicit formulas, the Lagrange polynomials and the Newton polynomials.
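This definition translates directly into code. A minimal Python sketch (the function name is illustrative, not from any library) that evaluates the unique interpolant through a set of points using the Lagrange form:

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the unique interpolating polynomial through (xs[j], ys[j]) at x."""
    n = len(xs)
    total = 0.0
    for j in range(n):
        # Lagrange basis polynomial l_j(x): equals 1 at xs[j], 0 at every other node
        basis = 1.0
        for m in range(n):
            if m != j:
                basis *= (x - xs[m]) / (xs[j] - xs[m])
        total += ys[j] * basis
    return total

# Three points on y = x**2; the degree-2 interpolant reproduces it exactly
print(lagrange_interpolate([0, 1, 2], [0, 1, 4], 3))  # 9.0
```

For n + 1 nodes this costs O(n^2) per evaluation; barycentric variants of the same formula reduce repeated evaluations to O(n) after an O(n^2) setup.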
The original use of interpolation polynomials was to approximate values of important transcendental functions such as the natural logarithm and trigonometric functions. Starting with a few accurately computed data points, the corresponding interpolation polynomial approximates the function at an arbitrary nearby point. Polynomial interpolation also forms the basis for algorithms in numerical quadrature (Simpson's rule) and numerical ordinary differential equations (linear multistep methods).
In computer graphics, polynomials can be used to approximate complicated plane curves given a few specified points, for example the shapes of letters in typography. This is usually done with Bézier curves, which are a simple generalization of interpolation polynomials (having specified tangents as well as specified points).
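As an illustration of that generalization, a Bézier curve can be evaluated by repeated linear interpolation of its control points (de Casteljau's algorithm). A minimal Python sketch, with illustrative names:

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve with the given 2D control points at t in [0, 1]
    by repeatedly interpolating between adjacent points (de Casteljau)."""
    pts = list(points)
    while len(pts) > 1:
        # Replace each adjacent pair by its linear interpolation at parameter t
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# Quadratic Bezier from (0, 0) to (2, 0), pulled toward control point (1, 2)
print(de_casteljau([(0, 0), (1, 2), (2, 0)], 0.5))  # (1.0, 1.0)
```

Unlike an interpolating polynomial, the curve passes through only the first and last control points; the interior points shape its tangents.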
In numerical analysis, polynomial interpolation is also essential to perform sub-quadratic multiplication and squaring, as in Karatsuba multiplication and Toom–Cook multiplication, where interpolation through points on a product polynomial yields the specific product required. For example, given a = f(x) = a_0 + a_1 x + ··· and b = g(x) = b_0 + b_1 x + ···, the product ab is a specific value of W(x) = f(x)g(x). One may easily find points along W(x) at small values of x, and interpolation based on those points will yield the terms of W(x) and hence the specific product ab. As formulated in Karatsuba multiplication, this technique is substantially faster than quadratic multiplication, even for modest-sized inputs, especially on parallel hardware.
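The evaluation-interpolation idea can be made concrete in the smallest case. A sketch of the Karatsuba step for two linear polynomials, where the three coefficients of W(x) are recovered from W(0), W(1) and the leading coefficient, using three multiplications instead of four (function and variable names are illustrative):

```python
def karatsuba_poly_mul(f, g):
    """Multiply f = f0 + f1*x by g = g0 + g1*x with 3 multiplications,
    by sampling the product W(x) = f(x)g(x) and interpolating its coefficients."""
    f0, f1 = f
    g0, g1 = g
    w0 = f0 * g0                   # W(0): the constant coefficient of W
    winf = f1 * g1                 # leading coefficient of W ("value at infinity")
    w1 = (f0 + f1) * (g0 + g1)     # W(1) = w0 + w_mid + winf
    w_mid = w1 - w0 - winf         # interpolate the middle coefficient
    return [w0, w_mid, winf]       # coefficients of W, lowest degree first

# (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
print(karatsuba_poly_mul([1, 2], [3, 4]))  # [3, 10, 8]
```

Setting x to the number base turns this polynomial product into an integer product, which is how Karatsuba multiplication saves one of the four digit multiplications.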


Related publications (3)

Related people (3)

Related concepts (19)

Related courses (45)

Related MOOCs (11)

Related units (2)

Related lectures (397)

MATH-468: Numerics for fluids, structures & electromagnetics

Course taught in alternate years (every two years)

MATH-250: Numerical analysis

Construction and analysis of numerical methods for solving approximation, linear algebra, and analysis problems

MSE-369: Finite element theory

The objective is to understand the finite element method, i.e. the weak and strong variational formulations, the assembly of elemental matrices, the global formulation, and the solution schemes

Digital Signal Processing I

Basic signal processing concepts, Fourier analysis and filters. This module can be used as a starting point or a basic refresher in elementary DSP.

Digital Signal Processing II

Adaptive signal processing, A/D and D/A. This module provides the basic tools for adaptive filtering and a solid mathematical framework for sampling and quantization.

Digital Signal Processing III

Advanced topics: this module covers real-time audio processing (with
examples on a hardware board), image processing and communication system design.

Lagrange polynomial

In numerical analysis, the Lagrange interpolating polynomial is the unique polynomial of lowest degree that interpolates a given set of data. Given a data set of coordinate pairs (x_0, y_0), ..., (x_n, y_n), in which the x_j are called nodes and the y_j are called values, the Lagrange polynomial L(x) has degree at most n and assumes each value at the corresponding node: L(x_j) = y_j. Although named after Joseph-Louis Lagrange, who published it in 1795, the method was first discovered in 1779 by Edward Waring. It is also an easy consequence of a formula published in 1783 by Leonhard Euler.
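The Lagrange form can also be expanded into ordinary power-basis coefficients by multiplying out each basis polynomial l_j. A minimal pure-Python sketch (illustrative names, not a library API):

```python
def lagrange_coefficients(xs, ys):
    """Coefficients (lowest degree first) of the unique interpolating
    polynomial, built by expanding each Lagrange basis polynomial l_j."""
    n = len(xs)
    coeffs = [0.0] * n
    for j in range(n):
        basis = [1.0]  # the constant polynomial 1, lowest degree first
        for m in range(n):
            if m == j:
                continue
            # Multiply basis by (x - xs[m]) / (xs[j] - xs[m])
            d = xs[j] - xs[m]
            new = [0.0] * (len(basis) + 1)
            for k, b in enumerate(basis):
                new[k] += -xs[m] * b / d
                new[k + 1] += b / d
            basis = new
        # Accumulate y_j * l_j(x) into the running coefficient list
        for k, b in enumerate(basis):
            coeffs[k] += ys[j] * b
    return coeffs

# Through (0, 1), (1, 3), (2, 2): L(x) = 1 + 3.5x - 1.5x^2
print(lagrange_coefficients([0, 1, 2], [1, 3, 2]))  # [1.0, 3.5, -1.5]
```

In practice the barycentric form is preferred for evaluation, since expanding to power-basis coefficients can be numerically ill-conditioned for many nodes.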

Interpolation

In the mathematical field of numerical analysis, interpolation is a type of estimation, a method of constructing (finding) new data points based on the range of a discrete set of known data points. In engineering and science, one often has a number of data points, obtained by sampling or experimentation, which represent the values of a function for a limited number of values of the independent variable. It is often required to interpolate; that is, estimate the value of that function for an intermediate value of the independent variable.
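The simplest instance of this idea is piecewise-linear interpolation between adjacent samples. A minimal sketch, assuming sorted, distinct sample points (names are illustrative):

```python
def linear_interpolate(xs, ys, x):
    """Piecewise-linear estimate of y at x from sorted samples (xs, ys)."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            # Fraction of the way from xs[i] to xs[i+1], then blend the values
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])
    raise ValueError("x lies outside the sampled range")

# Estimate the function halfway between the samples at x = 1 and x = 2
print(linear_interpolate([0, 1, 2], [0, 10, 40], 1.5))  # 25.0
```

Polynomial, spline, and trigonometric interpolation refine this same estimation problem with smoother model functions.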

Polynomial interpolation

In numerical analysis, polynomial interpolation is the interpolation of a given bivariate data set by the polynomial of lowest possible degree that passes through the points of the dataset. Given a set of n + 1 data points (x_0, y_0), ..., (x_n, y_n), with no two x_j the same, a polynomial function p is said to interpolate the data if p(x_j) = y_j for each j. There is always a unique such polynomial, commonly given by two explicit formulas, the Lagrange polynomials and the Newton polynomials.

The Wrong Method: Polynomial Interpolation (MOOC: Numerical Analysis for Engineers)

Explores the inefficiencies of the incorrect method for polynomial interpolation.

Error Analysis and Interpolation (MATH-251(a): Numerical analysis)

Explores error analysis and limitations in interpolation on evenly distributed nodes.

Trigonometric Interpolation: Approximation of Periodic Functions and Signals

Explores trigonometric interpolation for approximating periodic functions and signals using equally spaced nodes.

We analyze an expansion of the generalized block Krylov subspace framework of [Electron. Trans. Numer. Anal., 47 (2017), pp. 100-126]. This expansion allows the use of low-rank modifications of the matrix projected onto the block Krylov subspace and contains, as special cases, the block GMRES method and the new block Radau-Arnoldi method. Within this general setting, we present results that extend the interpolation property from the non-block case to a matrix polynomial interpolation property for the block case, and we relate the eigenvalues of the projected matrix to the latent roots of these matrix polynomials. Some convergence results for these modified block FOM methods for solving linear systems are presented. We then show how *cospatial* residuals can be preserved in the case of families of shifted linear block systems. This result is used to derive computationally practical restarted algorithms for block Krylov approximations that compute the action of a matrix function on a set of several vectors simultaneously. We prove some convergence results and present numerical results showing that two modifications of FOM, the block harmonic and the block Radau-Arnoldi methods for matrix functions, can significantly improve the convergence behavior.

Options are some of the most traded financial instruments and computing their price is a central task in financial mathematics and in practice. Consequently, the development of numerical algorithms for pricing options is an active field of research. In general, evaluating the price of a specific option relies on the properties of the stochastic model used for the underlying asset price. In this thesis we develop efficient and accurate numerical methods for option pricing in a specific class of models: polynomial models. They are a versatile tool for financial modeling and have useful properties that can be exploited for option pricing.
Significant challenges arise when developing option pricing techniques. For instance, the underlying model might have a high-dimensional parameter space. Furthermore, treating multi-asset options yields high-dimensional pricing problems. Therefore, the pricing method should be able to handle high dimensionality. Another important aspect is the efficiency of the algorithm: in real-world applications, option prices need to be delivered within short periods of time, making the algorithmic complexity a potential bottleneck. In this thesis, we address these challenges by developing option pricing techniques that are able to handle low and high-dimensional problems, and we propose complexity reduction techniques.
The thesis consists of four parts:
First, we present a methodology for European and American option pricing. The method uses the moments of the underlying price process to produce monotone sequences of lower and upper bounds of the option price. The bounds are obtained by solving a sequence of polynomial optimization problems. As the order of the moments increases, the bounds become sharper and eventually converge to the exact price under appropriate assumptions.
Second, we develop a fast algorithm for the incremental computation of nested block triangular matrix exponentials. This algorithm allows for an efficient incremental computation of the moment sequence of polynomial jump-diffusions. In other words, moments of order 0, 1, 2, 3... are computed sequentially until a dynamically evaluated criterion tells us to stop. The algorithm is based on the scaling and squaring technique and reduces the complexity of the pricing algorithms that require such an incremental moment computation.
Third, we develop a complexity reduction technique for high-dimensional option pricing. To this end, we first consider the option price as a function of model and payoff parameters. Then, the tensorized Chebyshev interpolation is used on the parameter space to increase the efficiency in computing option prices, while maintaining the required accuracy. The high dimensionality of the problem is treated by expressing the tensorized interpolation in the tensor train format and by deriving an efficient way, which is based on tensor completion, to approximate the interpolation coefficients.
Lastly, we propose a methodology for pricing single and multi-asset European options. The approach is a combination of Monte Carlo simulation and function approximation. We address the memory limitations that arise when treating very high-dimensional applications by combining the method with optimal sampling strategies and using a randomized algorithm to reduce the storage complexity of the approach.
The obtained numerical results show the effectiveness of the algorithms developed in this thesis.

The aim of this project is the analysis of different approaches for curve and surface reconstruction and the development of a related MATLAB library. We initially study Lagrange polynomial interpolation, then Bézier, B-spline and NURBS curves, and finally we introduce NURBS surfaces. At each step we list the useful properties of the different approaches, the details of the mathematical representation, and the evolution from simpler to more complicated representations.

2008