Multivariate interpolation
In numerical analysis, multivariate interpolation is interpolation on functions of more than one variable (multivariate functions); when the variates are spatial coordinates, it is also known as spatial interpolation. The function to be interpolated is known at given points $(x_i, y_i, z_i, \dots)$ and the interpolation problem consists of yielding values at arbitrary points $(x, y, z, \dots)$. Multivariate interpolation is particularly important in geostatistics, where it is used to create a digital elevation model from a set of points on the Earth's surface (for example, spot heights in a topographic survey or depths in a hydrographic survey).
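A minimal sketch (not from the article) of spatial interpolation using SciPy's griddata; the spot heights and the query grid below are invented for illustration.

    import numpy as np
    from scipy.interpolate import griddata

    # Hypothetical spot heights: (x, y) survey locations and measured elevations.
    points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
    heights = np.array([10.0, 12.0, 11.0, 14.0, 12.5])

    # Interpolate onto a regular grid, giving a toy digital elevation model.
    xi, yi = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
    dem = griddata(points, heights, (xi, yi), method="linear")
    print(dem)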
Karatsuba algorithm
The Karatsuba algorithm is a fast multiplication algorithm. It was discovered by Anatoly Karatsuba in 1960 and published in 1962. It is a divide-and-conquer algorithm that reduces the multiplication of two n-digit numbers to three multiplications of n/2-digit numbers and, by repeating this reduction, to at most $n^{\log_2 3} \approx n^{1.585}$ single-digit multiplications. It is therefore asymptotically faster than the traditional algorithm, which performs $n^{2}$ single-digit products.
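A short Python sketch of the recursion (decimal digits, non-negative integers only); the function name and base case are choices made for this example.

    def karatsuba(x, y):
        """Multiply non-negative integers using three half-size recursive products."""
        if x < 10 or y < 10:                     # single-digit factor: multiply directly
            return x * y
        m = max(len(str(x)), len(str(y))) // 2   # split position, in decimal digits
        high_x, low_x = divmod(x, 10 ** m)
        high_y, low_y = divmod(y, 10 ** m)
        z0 = karatsuba(low_x, low_y)             # low * low
        z2 = karatsuba(high_x, high_y)           # high * high
        z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2   # cross terms
        return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0

    assert karatsuba(1234, 5678) == 1234 * 5678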
Analytic signal
In mathematics and signal processing, an analytic signal is a complex-valued function that has no negative frequency components. The real and imaginary parts of an analytic signal are real-valued functions related to each other by the Hilbert transform. The analytic representation of a real-valued function is an analytic signal, comprising the original function and its Hilbert transform. This representation facilitates many mathematical manipulations.
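For a concrete illustration, SciPy's hilbert function computes the analytic signal of a sampled real sequence; the test tone below is made up for the example.

    import numpy as np
    from scipy.signal import hilbert

    # A real-valued test signal: a 5 Hz cosine sampled at 500 Hz for one second.
    t = np.arange(0, 1.0, 1 / 500)
    x = np.cos(2 * np.pi * 5 * t)

    # hilbert() returns the analytic signal x + j*H(x): the original samples as the
    # real part and their Hilbert transform as the imaginary part.
    z = hilbert(x)
    envelope = np.abs(z)                 # instantaneous amplitude
    phase = np.unwrap(np.angle(z))       # instantaneous phase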
Spline interpolation
In the mathematical field of numerical analysis, spline interpolation is a form of interpolation where the interpolant is a special type of piecewise polynomial called a spline. That is, instead of fitting a single, high-degree polynomial to all of the values at once, spline interpolation fits low-degree polynomials to small subsets of the values, for example, fitting nine cubic polynomials between each of the pairs of ten points, instead of fitting a single degree-nine polynomial to all of them.
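A minimal sketch of the ten-point case using SciPy's CubicSpline; the sample data are invented for the example.

    import numpy as np
    from scipy.interpolate import CubicSpline

    # Ten sample points of a made-up function.
    x = np.linspace(0, 9, 10)
    y = np.sin(x)

    # CubicSpline fits one cubic on each of the nine subintervals, joined so that
    # the first and second derivatives are continuous at the interior points.
    spline = CubicSpline(x, y)
    print(spline(4.5), np.sin(4.5))      # interpolated value vs. the true value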
Functional square root
In mathematics, a functional square root (sometimes called a half iterate) is a square root of a function with respect to the operation of function composition. In other words, a functional square root of a function g is a function f satisfying f(f(x)) = g(x) for all x. Notations expressing that f is a functional square root of g are $f = g^{[1/2]}$ and $f = g^{1/2}$. The functional square root of the exponential function (now known as a half-exponential function) was studied by Hellmuth Kneser in 1950.
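A simple worked example (not from the passage): on $x \ge 0$, the function $f(x) = x^{2}$ is a functional square root of $g(x) = x^{4}$, since

    f(f(x)) = (x^{2})^{2} = x^{4} = g(x),

so $f = g^{1/2}$ with respect to composition; likewise $f(x) = x + 1$ is a half iterate of $g(x) = x + 2$.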
Iterated function
In mathematics, an iterated function is a function X → X (that is, a function from some set X to itself) which is obtained by composing another function f : X → X with itself a certain number of times. The process of repeatedly applying the same function is called iteration. In this process, starting from some initial object, the result of applying a given function is fed again into the function as input, and this process is repeated. Iterates are commonly written with exponents, as in $f^{2} = f \circ f$, where $\circ$ is the circle-shaped symbol of function composition.
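A minimal Python sketch of iteration; the helper name iterate is a choice made for this example.

    def iterate(f, n):
        """Return the n-th iterate of f, i.e. f composed with itself n times."""
        def f_n(x):
            for _ in range(n):
                x = f(x)
            return x
        return f_n

    double = lambda x: 2 * x
    print(iterate(double, 3)(5))   # 2*(2*(2*5)) = 40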
Multiple integral
In mathematics (specifically multivariable calculus), a multiple integral is a definite integral of a function of several real variables, for instance, f(x, y) or f(x, y, z). Integrals of a function of two variables over a region in $\mathbb{R}^{2}$ (the real-number plane) are called double integrals, and integrals of a function of three variables over a region in $\mathbb{R}^{3}$ (real-number 3D space) are called triple integrals. For multiple integrals of a single-variable function, see the Cauchy formula for repeated integration.
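As a numerical illustration (not from the article), a double integral over the unit square evaluated with SciPy's dblquad; the integrand is chosen for the example.

    from scipy.integrate import dblquad

    # Double integral of f(x, y) = x*y over [0, 1] x [0, 1]; the exact value is 1/4.
    # dblquad takes the integrand as func(y, x), the outer limits for x, and the
    # inner limits for y as functions of x.
    value, abs_error = dblquad(lambda y, x: x * y, 0, 1, lambda x: 0.0, lambda x: 1.0)
    print(value)   # approximately 0.25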
Fractional calculus
Fractional calculus is a branch of mathematical analysis that studies the several different possibilities of defining real-number or complex-number powers of the differentiation operator $D$ and of the integration operator $J$, and of developing a calculus for such operators generalizing the classical one.
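One standard worked example (under the Riemann–Liouville definition with base point 0): fractional derivatives of power functions satisfy

    D^{\alpha} x^{k} = \frac{\Gamma(k+1)}{\Gamma(k+1-\alpha)}\, x^{k-\alpha},
    \qquad\text{so}\qquad
    D^{1/2} x = \frac{\Gamma(2)}{\Gamma(3/2)}\, x^{1/2} = \frac{2\sqrt{x}}{\sqrt{\pi}},
    \qquad
    D^{1/2}\!\bigl(D^{1/2} x\bigr) = 1 = \frac{d}{dx}\, x,

so applying the half-derivative twice recovers the ordinary first derivative.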
Orthogonal functions
In mathematics, orthogonal functions belong to a function space that is a vector space equipped with a bilinear form. When the function space has an interval as the domain, the bilinear form may be the integral of the product of functions over the interval: $\langle f, g \rangle = \int \overline{f(x)}\, g(x)\, dx$. The functions $f$ and $g$ are orthogonal when this integral is zero, i.e. $\langle f, g \rangle = 0$ whenever $f \neq g$. As with a basis of vectors in a finite-dimensional space, orthogonal functions can form an infinite basis for a function space.
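A quick numerical check (the functions and interval are chosen for this example): $\sin x$ and $\sin 2x$ are orthogonal on $[0, 2\pi]$ under this inner product.

    import numpy as np
    from scipy.integrate import quad

    # <f, g> = integral of f(x) * g(x) over [0, 2*pi]; for sin(x) and sin(2x) it is zero.
    inner, _ = quad(lambda x: np.sin(x) * np.sin(2 * x), 0, 2 * np.pi)
    print(abs(inner) < 1e-10)   # True: the integral vanishes (up to numerical error)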
Smoothness
In mathematical analysis, the smoothness of a function is a property measured by the number of continuous derivatives it has over some domain, called differentiability class. At the very minimum, a function could be considered smooth if it is differentiable everywhere (hence continuous). At the other end, it might also possess derivatives of all orders in its domain, in which case it is said to be infinitely differentiable and referred to as a C-infinity function (or $C^{\infty}$ function).
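A classical example (not from the passage) separating the lowest classes: the function

    f(x) = \begin{cases} x^{2}\sin(1/x), & x \neq 0, \\ 0, & x = 0, \end{cases}

is differentiable everywhere, but its derivative is not continuous at $0$, so $f$ is differentiable without being of class $C^{1}$.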