
# Bilinear interpolation

Summary

In mathematics, bilinear interpolation is a method for interpolating functions of two variables (e.g., x and y) using repeated linear interpolation. It is usually applied to functions sampled on a 2D rectilinear grid, though it can be generalized to functions defined on the vertices of (a mesh of) arbitrary convex quadrilaterals.
Bilinear interpolation is performed using linear interpolation first in one direction, and then again in another direction. Although each step is linear in the sampled values and in the position, the interpolation as a whole is not linear but rather quadratic in the sample location.
Bilinear interpolation is one of the basic resampling techniques in computer vision and image processing, where it is also called bilinear filtering or bilinear texture mapping.
Suppose that we want to find the value of the unknown function f at the point (x, y). It is assumed that we know the value of f at the four points Q11 = (x1, y1), Q12 = (x1, y2), Q21 = (x2, y1), and Q22 = (x2, y2).
We first do linear interpolation in the x-direction. This yields

$$f(x, y_1) \approx \frac{x_2 - x}{x_2 - x_1} f(Q_{11}) + \frac{x - x_1}{x_2 - x_1} f(Q_{21}),$$

$$f(x, y_2) \approx \frac{x_2 - x}{x_2 - x_1} f(Q_{12}) + \frac{x - x_1}{x_2 - x_1} f(Q_{22}).$$

We proceed by interpolating in the y-direction to obtain the desired estimate:

$$f(x, y) \approx \frac{y_2 - y}{y_2 - y_1} f(x, y_1) + \frac{y - y_1}{y_2 - y_1} f(x, y_2).$$

Note that we arrive at the same result if the interpolation is done first along the y-direction and then along the x-direction.
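The two-step procedure above can be sketched as a small Python function (the helper name `bilerp` and the argument order are choices of this sketch; it assumes x1 < x2 and y1 < y2):

```python
def bilerp(x, y, x1, x2, y1, y2, q11, q12, q21, q22):
    """Bilinear interpolation of f at (x, y) from its values at the four
    corners Q11=(x1,y1), Q12=(x1,y2), Q21=(x2,y1), Q22=(x2,y2)."""
    # Linear interpolation in the x-direction, at heights y1 and y2.
    t = (x - x1) / (x2 - x1)
    f_xy1 = (1 - t) * q11 + t * q21
    f_xy2 = (1 - t) * q12 + t * q22
    # Linear interpolation of the two intermediate values in the y-direction.
    u = (y - y1) / (y2 - y1)
    return (1 - u) * f_xy1 + u * f_xy2
```

Swapping the two steps (interpolating in y first, then in x) produces the same value, as noted above.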
An alternative way is to write the solution to the interpolation problem as a multilinear polynomial

$$f(x, y) \approx a_{00} + a_{10} x + a_{01} y + a_{11} x y,$$

where the coefficients are found by solving the linear system

$$\begin{bmatrix} 1 & x_1 & y_1 & x_1 y_1 \\ 1 & x_1 & y_2 & x_1 y_2 \\ 1 & x_2 & y_1 & x_2 y_1 \\ 1 & x_2 & y_2 & x_2 y_2 \end{bmatrix} \begin{bmatrix} a_{00} \\ a_{10} \\ a_{01} \\ a_{11} \end{bmatrix} = \begin{bmatrix} f(Q_{11}) \\ f(Q_{12}) \\ f(Q_{21}) \\ f(Q_{22}) \end{bmatrix},$$

yielding the result

$$a_{00} = \frac{f(Q_{11})\,x_2 y_2 - f(Q_{12})\,x_2 y_1 - f(Q_{21})\,x_1 y_2 + f(Q_{22})\,x_1 y_1}{(x_2 - x_1)(y_2 - y_1)}, \qquad a_{10} = \frac{-f(Q_{11})\,y_2 + f(Q_{12})\,y_1 + f(Q_{21})\,y_2 - f(Q_{22})\,y_1}{(x_2 - x_1)(y_2 - y_1)},$$

$$a_{01} = \frac{-f(Q_{11})\,x_2 + f(Q_{12})\,x_2 + f(Q_{21})\,x_1 - f(Q_{22})\,x_1}{(x_2 - x_1)(y_2 - y_1)}, \qquad a_{11} = \frac{f(Q_{11}) - f(Q_{12}) - f(Q_{21}) + f(Q_{22})}{(x_2 - x_1)(y_2 - y_1)}.$$
The solution can also be written as a weighted mean of the f(Q):

$$f(x, y) \approx w_{11} f(Q_{11}) + w_{12} f(Q_{12}) + w_{21} f(Q_{21}) + w_{22} f(Q_{22}),$$

where the weights sum to 1 and satisfy the transposed linear system

$$\begin{bmatrix} 1 & 1 & 1 & 1 \\ x_1 & x_1 & x_2 & x_2 \\ y_1 & y_2 & y_1 & y_2 \\ x_1 y_1 & x_1 y_2 & x_2 y_1 & x_2 y_2 \end{bmatrix} \begin{bmatrix} w_{11} \\ w_{12} \\ w_{21} \\ w_{22} \end{bmatrix} = \begin{bmatrix} 1 \\ x \\ y \\ x y \end{bmatrix},$$

yielding the result

$$\begin{bmatrix} w_{11} \\ w_{12} \\ w_{21} \\ w_{22} \end{bmatrix} = \frac{1}{(x_2 - x_1)(y_2 - y_1)} \begin{bmatrix} x_2 y_2 & -y_2 & -x_2 & 1 \\ -x_2 y_1 & y_1 & x_2 & -1 \\ -x_1 y_2 & y_2 & x_1 & -1 \\ x_1 y_1 & -y_1 & -x_1 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ x \\ y \\ x y \end{bmatrix},$$

which simplifies to

$$w_{11} = \frac{(x_2 - x)(y_2 - y)}{(x_2 - x_1)(y_2 - y_1)}, \quad w_{12} = \frac{(x_2 - x)(y - y_1)}{(x_2 - x_1)(y_2 - y_1)}, \quad w_{21} = \frac{(x - x_1)(y_2 - y)}{(x_2 - x_1)(y_2 - y_1)}, \quad w_{22} = \frac{(x - x_1)(y - y_1)}{(x_2 - x_1)(y_2 - y_1)},$$

in agreement with the result obtained by repeated linear interpolation. The set of weights can also be interpreted as a set of generalized barycentric coordinates for a rectangle.
Combining the above, we have

$$f(x, y) \approx \frac{1}{(x_2 - x_1)(y_2 - y_1)} \Big[ f(Q_{11})(x_2 - x)(y_2 - y) + f(Q_{21})(x - x_1)(y_2 - y) + f(Q_{12})(x_2 - x)(y - y_1) + f(Q_{22})(x - x_1)(y - y_1) \Big].$$
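The combined weighted-mean form can be sketched directly; each weight is the area of the sub-rectangle opposite its corner, normalized by the total area (the helper name `bilerp_weighted` is an assumption of this sketch, which also assumes a non-degenerate rectangle):

```python
def bilerp_weighted(x, y, x1, x2, y1, y2, q11, q12, q21, q22):
    """Evaluate bilinear interpolation as a weighted mean of the four
    corner values; the four weights always sum to 1."""
    denom = (x2 - x1) * (y2 - y1)
    w11 = (x2 - x) * (y2 - y) / denom
    w12 = (x2 - x) * (y - y1) / denom
    w21 = (x - x1) * (y2 - y) / denom
    w22 = (x - x1) * (y - y1) / denom
    # Sanity check: weights form a partition of unity.
    assert abs(w11 + w12 + w21 + w22 - 1.0) < 1e-12
    return w11 * q11 + w12 * q12 + w21 * q21 + w22 * q22
```

At the midpoint of the rectangle all four weights equal 1/4, so the interpolant is the plain average of the corner values.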
If we choose a coordinate system in which the four points where f is known are (0, 0), (0, 1), (1, 0), and (1, 1), then the interpolation formula simplifies to

$$f(x, y) \approx f(0,0)\,(1-x)(1-y) + f(1,0)\,x(1-y) + f(0,1)\,(1-x)\,y + f(1,1)\,x y,$$

or equivalently, in matrix operations:

$$f(x, y) \approx \begin{bmatrix} 1-x & x \end{bmatrix} \begin{bmatrix} f(0,0) & f(0,1) \\ f(1,0) & f(1,1) \end{bmatrix} \begin{bmatrix} 1-y \\ y \end{bmatrix}.$$

Here we also recognize the weights:

$$w_{11} = (1-x)(1-y), \quad w_{12} = (1-x)\,y, \quad w_{21} = x\,(1-y), \quad w_{22} = x y.$$
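The unit-square matrix form can be sketched in plain Python (the helper name `bilerp_unit` and the argument order f00 = f(0,0), f01 = f(0,1), f10 = f(1,0), f11 = f(1,1) are assumptions of this sketch):

```python
def bilerp_unit(x, y, f00, f01, f10, f11):
    """Unit-square bilinear interpolation as the row-matrix-column product
    [1-x, x] @ [[f00, f01], [f10, f11]] @ [1-y, y]^T."""
    row = [1 - x, x]          # weights along the x-axis
    col = [1 - y, y]          # weights along the y-axis
    F = [[f00, f01], [f10, f11]]
    return sum(row[i] * F[i][j] * col[j] for i in range(2) for j in range(2))
```

For corner values f(0,0)=0, f(0,1)=1, f(1,0)=2, f(1,1)=3 the interpolant reduces to the plane 2x + y, so for example f(0.25, 0.75) = 1.25.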
Alternatively, the interpolant on the unit square can be written as

$$f(x, y) \approx a_{00} + a_{10} x + a_{01} y + a_{11} x y,$$

where

$$a_{00} = f(0,0), \quad a_{10} = f(1,0) - f(0,0), \quad a_{01} = f(0,1) - f(0,0), \quad a_{11} = f(1,1) - f(1,0) - f(0,1) + f(0,0).$$

In both cases, the number of constants (four) corresponds to the number of data points where f is given.
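As a small illustration of the coefficient form on the unit square (the helper name `bilerp_coeffs` is an assumption of this sketch):

```python
def bilerp_coeffs(f00, f01, f10, f11):
    """Coefficients of the multilinear polynomial a00 + a10*x + a01*y + a11*x*y
    on the unit square, given f00=f(0,0), f01=f(0,1), f10=f(1,0), f11=f(1,1)."""
    a00 = f00
    a10 = f10 - f00
    a01 = f01 - f00
    a11 = f11 - f10 - f01 + f00
    return a00, a10, a01, a11
```

Note that when the a11 term vanishes, as it does for corner values lying on a plane, the interpolant is genuinely linear; in general the xy cross term is what makes bilinear interpolation quadratic along diagonal directions.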


Related publications (3)

Related people (2)

Related MOOCs (5)

Introduction to Geographic Information Systems (part 2)

This course is the second part of a course devoted to the theoretical and practical foundations of geographic information systems. It offers an introduction to geographic information sys…

Geographical Information Systems 2

This course is the second part of a course dedicated to the theoretical and practical bases of Geographic Information Systems (GIS).
It offers an introduction to GIS that does not require prior compu

Related concepts (7)

Nearest-neighbor interpolation (also known as proximal interpolation or, in some contexts, point sampling) is a simple method of multivariate interpolation in one or more dimensions. Interpolation is the problem of approximating the value of a function for a non-given point in some space when given the value of that function in points around (neighboring) that point. The nearest neighbor algorithm selects the value of the nearest point and does not consider the values of neighboring points at all, yielding a piecewise-constant interpolant.
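As a minimal 1-D sketch of the nearest-neighbor idea (the helper name `nearest_neighbor` and the use of a dict mapping sample positions to values are assumptions of this sketch):

```python
def nearest_neighbor(x, samples):
    """Piecewise-constant interpolation: return the value attached to the
    sample position closest to x, ignoring all other neighbors."""
    nearest = min(samples, key=lambda p: abs(p - x))
    return samples[nearest]
```

Unlike bilinear interpolation, the result jumps discontinuously halfway between sample points instead of blending the neighboring values.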

In the mathematical field of numerical analysis, interpolation is a type of estimation, a method of constructing (finding) new data points based on the range of a discrete set of known data points. In engineering and science, one often has a number of data points, obtained by sampling or experimentation, which represent the values of a function for a limited number of values of the independent variable. It is often required to interpolate; that is, estimate the value of that function for an intermediate value of the independent variable.

Related courses (36)

Construction and analysis of numerical methods for the solution of approximation, linear algebra, and analysis problems.

Numerical modeling of solids is approached through the finite element method. The purely analytical aspects are presented first, followed by the means of interpolation, integration, and …

The objective is to understand the finite element method, i.e. the weak and strong variational formulations, the assembly of elementary matrices, the global formulation, and the res…

Related publications (3)

Quentin Christian Becker, Mike Yan Michelis

The underlying geometrical structure of the latent space in deep generative models is in most cases not Euclidean, which may lead to biases when comparing interpolation capabilities of two models. Smoothness and plausibility of linear interpolations in latent space are associated with the quality of the underlying generative model. In this paper, we show that not all such interpolations are comparable as they can deviate arbitrarily from the shortest interpolation curve given by the geodesic. This deviation is revealed by computing curve lengths with the pull-back metric of the generative model, finding shorter curves than the straight line between endpoints, and measuring a non-zero relative length improvement on this straight line. This leads to a strategy to compare linear interpolations across two generative models. We also show the effect and importance of choosing an appropriate output space for computing shorter curves. For this computation we derive an extension of the pull-back metric.

2021

Omnidirectional images are spherical visual signals that provide a wide, 360°, view of a scene from a specific position. Such images are becoming increasingly popular in fields like virtual reality and robotics. Compared to conventional 2D images, the storage and bandwidth requirements of omnidirectional signals are much higher, due to their specific nature. Thus, there is a need for image compression schemes that reduce the storage space dedicated to omnidirectional images. Image compression algorithms can be broadly classified into two groups: lossless and lossy. Lossless schemes reconstruct the exact original data but cannot reduce the size beyond a certain limit. Lossy methods are generally better solutions as long as they provide a decent compression rate without adding high visual distortion to the reconstructed image. If a planar, lossy image compression scheme is applied to omnidirectional images, some problems arise. It is possible to apply a planar compression scheme to a projected version of a 360° image; however, in these projection schemes (such as equirectangular projection) the sampling rate differs between the poles and the center. Consequently, the filters of planar compression schemes that do not consider this difference end in suboptimal results and distortions in the reconstructed images. Recently, with the success of deep neural networks in many image processing tasks, researchers began to use them for image compression as well. In this study, we propose a deep learning-based method for the compression of omnidirectional images by combining state-of-the-art approaches from deep learning-based image compression schemes with special convolutional layers that take into account the geometry of the omnidirectional image.
Compared to the available methods, it is the first that can be applied directly to the equirectangularly projected version of omnidirectional images and that considers the geometry in both the scheme and the layers themselves. To arrive at this method, different geometry-aware convolutional layers were tried. We exploited various methods of downsampling and upsampling, such as spherical pooling layers, strided or transposed convolutions, bilinear interpolation, and pixel shuffle. In the end, a method is proposed that benefits from specific spherical convolutional layers whose sampling positions, following the geometry of the projected omnidirectional image, differ across the heights of the image. Additionally, because it uses an iterative training method that computes the residual between output and input and feeds it back to the network as the input of the next iteration, it can provide different compression rates with just one pass of training. Finally, it benefits from a novel patching method that is well aligned with the spherical convolution layers and lets the method run efficiently without requiring high computational power. The model was compared with a similar architecture without spherical convolutions and spherical patching and showed some improvements. The architecture has been optimized and improved, and it has the potential to compete with popular image compression schemes such as JPEG, especially in terms of reconstructing colors.

2020

Options are some of the most traded financial instruments and computing their price is a central task in financial mathematics and in practice. Consequently, the development of numerical algorithms for pricing options is an active field of research. In general, evaluating the price of a specific option relies on the properties of the stochastic model used for the underlying asset price. In this thesis we develop efficient and accurate numerical methods for option pricing in a specific class of models: polynomial models. They are a versatile tool for financial modeling and have useful properties that can be exploited for option pricing.
Significant challenges arise when developing option pricing techniques. For instance, the underlying model might have a high-dimensional parameter space. Furthermore, treating multi-asset options yields high-dimensional pricing problems. Therefore, the pricing method should be able to handle high dimensionality. Another important aspect is the efficiency of the algorithm: in real-world applications, option prices need to be delivered within short periods of time, making the algorithmic complexity a potential bottleneck. In this thesis, we address these challenges by developing option pricing techniques that are able to handle low and high-dimensional problems, and we propose complexity reduction techniques.
The thesis consists of four parts:
First, we present a methodology for European and American option pricing. The method uses the moments of the underlying price process to produce monotone sequences of lower and upper bounds of the option price. The bounds are obtained by solving a sequence of polynomial optimization problems. As the order of the moments increases, the bounds become sharper and eventually converge to the exact price under appropriate assumptions.
Second, we develop a fast algorithm for the incremental computation of nested block triangular matrix exponentials. This algorithm allows for an efficient incremental computation of the moment sequence of polynomial jump-diffusions. In other words, moments of order 0, 1, 2, 3... are computed sequentially until a dynamically evaluated criterion tells us to stop. The algorithm is based on the scaling and squaring technique and reduces the complexity of the pricing algorithms that require such an incremental moment computation.
Third, we develop a complexity reduction technique for high-dimensional option pricing. To this end, we first consider the option price as a function of model and payoff parameters. Then, the tensorized Chebyshev interpolation is used on the parameter space to increase the efficiency in computing option prices, while maintaining the required accuracy. The high dimensionality of the problem is treated by expressing the tensorized interpolation in the tensor train format and by deriving an efficient way, which is based on tensor completion, to approximate the interpolation coefficients.
Lastly, we propose a methodology for pricing single and multi-asset European options. The approach is a combination of Monte Carlo simulation and function approximation. We address the memory limitations that arise when treating very high-dimensional applications by combining the method with optimal sampling strategies and using a randomized algorithm to reduce the storage complexity of the approach.
The obtained numerical results show the effectiveness of the algorithms developed in this thesis.

Image manipulations: Affine transforms and Interpolation (MOOC: Digital Signal Processing I)

Covers affine transforms, rotation, shear, and bilinear interpolation for image manipulations.

Trigonometric Interpolation: Approximation of Periodic Functions and Signals

Explores trigonometric interpolation for approximating periodic functions and signals using equally spaced nodes.