Stieltjes moment problem: In mathematics, the Stieltjes moment problem, named after Thomas Joannes Stieltjes, seeks necessary and sufficient conditions for a sequence (m_0, m_1, m_2, ...) to be of the form m_n = ∫_0^∞ x^n dμ(x) for some measure μ. If such a measure μ exists, one asks whether it is unique. The essential difference between this and other well-known moment problems is that this is on a half-line [0, ∞), whereas in the Hausdorff moment problem one considers a bounded interval [0, 1], and in the Hamburger moment problem one considers the whole line (−∞, ∞).
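To make the solvability condition concrete, here is a minimal Python sketch (an illustration, not part of the entry): the classical criterion says a sequence is a Stieltjes moment sequence exactly when the Hankel matrices [m_{i+j}] and [m_{i+j+1}] are positive semidefinite, and the factorials m_n = n!, the moments of the exponential distribution on [0, ∞), pass the finite sections of that test checked below (the section size N = 4 is an arbitrary choice).

```python
import numpy as np
from math import factorial

m = [factorial(k) for k in range(9)]             # m_0 .. m_8: moments of Exp(1)
N = 4                                            # size of the finite Hankel sections

H  = np.array([[m[i + j]     for j in range(N)] for i in range(N)], dtype=float)
H1 = np.array([[m[i + j + 1] for j in range(N)] for i in range(N)], dtype=float)

# Stieltjes' criterion (finite sections): both Hankel matrices positive semidefinite.
print(np.linalg.eigvalsh(H).min()  >= -1e-9)     # True
print(np.linalg.eigvalsh(H1).min() >= -1e-9)     # True
```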
Weak solution: In mathematics, a weak solution (also called a generalized solution) to an ordinary or partial differential equation is a function for which the derivatives may not all exist but which is nonetheless deemed to satisfy the equation in some precisely defined sense. There are many different definitions of weak solution, appropriate for different classes of equations. One of the most important is based on the notion of distributions.
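As a concrete illustration of the distributional notion (not from the entry): u(x) = |x| has no classical derivative at 0, yet it solves u′ = sign(x) weakly, because ∫ u φ′ dx = −∫ sign(x) φ dx for every smooth compactly supported test function φ. The sketch below checks that identity numerically for one bump-shaped test function of my own choosing.

```python
import numpy as np
from scipy.integrate import quad

def phi(x):                      # smooth bump test function supported on (-0.7, 1.3)
    s = x - 0.3
    return np.exp(-1.0 / (1.0 - s * s)) if abs(s) < 1 else 0.0

def dphi(x):                     # its classical derivative
    s = x - 0.3
    return phi(x) * (-2.0 * s) / (1.0 - s * s) ** 2 if abs(s) < 1 else 0.0

# Weak formulation of u' = sign(x) for u(x) = |x|:  ∫ u φ' dx = -∫ sign(x) φ dx.
lhs, _ = quad(lambda x: abs(x) * dphi(x), -0.7, 1.3, points=[0.0])
rhs, _ = quad(lambda x: -np.sign(x) * phi(x), -0.7, 1.3, points=[0.0])
print(lhs, rhs, np.isclose(lhs, rhs))            # the two sides agree
```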
Hausdorff moment problem: In mathematics, the Hausdorff moment problem, named after Felix Hausdorff, asks for necessary and sufficient conditions that a given sequence (m_0, m_1, m_2, ...) be the sequence of moments of some Borel measure μ supported on the closed unit interval [0, 1]. In the case m_0 = 1, this is equivalent to the existence of a random variable X supported on [0, 1] such that E[X^n] = m_n. The essential difference between this and other well-known moment problems is that this is on a bounded interval, whereas in the Stieltjes moment problem one considers a half-line [0, ∞), and in the Hamburger moment problem one considers the whole line (−∞, ∞).
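Hausdorff's classical answer is that (m_n) must be completely monotone, i.e. every iterated forward difference satisfies (−1)^k (Δ^k m)_j ≥ 0. A small Python sketch (illustrative, not from the entry; the sequence length 12 is an arbitrary choice) checks finitely many of these conditions for m_n = 1/(n + 1), the moments of the uniform distribution on [0, 1].

```python
import numpy as np

m = np.array([1.0 / (k + 1) for k in range(12)])   # m_0 .. m_11, moments of Uniform[0, 1]

completely_monotone = True
d = m.copy()
for k in range(len(m)):
    if np.any((-1) ** k * d < -1e-12):             # need (-1)^k (Δ^k m)_j >= 0 for all j
        completely_monotone = False
    d = np.diff(d)                                  # next iterated forward difference

print(completely_monotone)                          # True: a valid Hausdorff moment sequence
```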
Interval (mathematics): In mathematics, a (real) interval is a set of real numbers that contains all real numbers lying between any two numbers of the set. For example, the set of numbers x satisfying 0 ≤ x ≤ 1 is an interval which contains 0, 1, and all numbers in between. Other examples of intervals are the set of numbers x such that 0 < x < 1, the set of all real numbers, the set of nonnegative real numbers, the set of positive real numbers, the empty set, and any singleton (set of one element).
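For readers who like to experiment, the examples above can be written down directly with SymPy's set classes; this is just an illustration of the definitions, not part of the entry.

```python
from sympy import Interval, S, oo, FiniteSet

closed_unit = Interval(0, 1)          # {x : 0 <= x <= 1}
open_unit   = Interval.open(0, 1)     # {x : 0 < x < 1}
nonneg      = Interval(0, oo)         # the nonnegative real numbers
positive    = Interval.open(0, oo)    # the positive real numbers
reals       = S.Reals                 # the whole real line
empty       = S.EmptySet              # the empty set
single      = FiniteSet(3)            # a singleton (a degenerate interval)

print(closed_unit.contains(0), open_unit.contains(0))   # True False
print(positive.is_subset(nonneg))                        # True
```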
Girsanov theorem: In probability theory, the Girsanov theorem tells how stochastic processes change under changes in measure. The theorem is especially important in the theory of financial mathematics, as it tells how to convert from the physical measure, which describes the probability that an underlying instrument (such as a share price or interest rate) will take a particular value or values, to the risk-neutral measure, which is a very useful tool for evaluating the value of derivatives on the underlying.
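A hedged Monte Carlo sketch of the mechanism (illustrative only; the drift θ, horizon T, seed and sample count are arbitrary choices): reweighting Brownian paths by the Radon–Nikodym density Z_T = exp(−θW_T − θ²T/2) makes W_t + θt behave like a Brownian motion under the new measure Q, which is checked here through its terminal mean and variance.

```python
import numpy as np

rng = np.random.default_rng(0)
T, theta, n_paths = 1.0, 0.7, 1_000_000

W_T = rng.normal(0.0, np.sqrt(T), n_paths)          # terminal Brownian values under P
Z_T = np.exp(-theta * W_T - 0.5 * theta**2 * T)     # density dQ/dP on F_T
shifted = W_T + theta * T                           # terminal value of W_t + theta*t

print(np.mean(Z_T))                  # ~1.0: Z_T integrates to one, E_P[Z_T] = 1
print(np.mean(Z_T * shifted))        # ~0.0: mean of the shifted process under Q
print(np.mean(Z_T * shifted**2))     # ~T = 1.0: variance of a Q-Brownian motion at time T
```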
Donsker's theorem: In probability theory, Donsker's theorem (also known as Donsker's invariance principle, or the functional central limit theorem), named after Monroe D. Donsker, is a functional extension of the central limit theorem. Let X_1, X_2, X_3, ... be a sequence of independent and identically distributed (i.i.d.) random variables with mean 0 and variance 1. Let S_n := X_1 + ⋯ + X_n. The stochastic process S := (S_n)_{n ∈ ℕ} is known as a random walk. Define the diffusively rescaled random walk (partial-sum process) by W^(n)(t) := S_{⌊nt⌋} / √n, t ∈ [0, 1]. The central limit theorem asserts that W^(n)(1) converges in distribution to a standard Gaussian random variable W(1) as n → ∞.
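The scaling is easy to see in simulation. The sketch below (not part of the entry; the step distribution, sample sizes and seed are arbitrary choices) builds W^(n)(1) = S_n/√n from i.i.d. mean-0, variance-1 steps and checks that it is approximately standard normal.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_paths = 400, 10_000

# i.i.d. steps uniform on [-sqrt(3), sqrt(3)], hence mean 0 and variance 1
steps = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(n_paths, n))
S_n = steps.sum(axis=1)                       # terminal partial sum of each walk
W_at_1 = S_n / np.sqrt(n)                     # W^(n)(1) = S_n / sqrt(n)

print(W_at_1.mean(), W_at_1.var())            # ~0 and ~1
print(stats.kstest(W_at_1, "norm").pvalue)    # typically not small: looks standard Gaussian
```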
Del: Del, or nabla, is an operator used in mathematics (particularly in vector calculus) as a vector differential operator, usually represented by the nabla symbol ∇. When applied to a function defined on a one-dimensional domain, it denotes the standard derivative of the function as defined in calculus. When applied to a field (a function defined on a multi-dimensional domain), it may denote any one of three operations depending on the way it is applied: the gradient or (locally) steepest slope of a scalar field (or sometimes of a vector field, as in the Navier–Stokes equations); the divergence of a vector field; or the curl (rotation) of a vector field.
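A short SymPy illustration of the three operations (an aside, not part of the entry; the fields f and F are arbitrary examples): gradient of a scalar field, and divergence and curl of a vector field, all built from ∇.

```python
from sympy.vector import CoordSys3D, gradient, divergence, curl

N = CoordSys3D("N")
x, y, z = N.x, N.y, N.z

f = x**2 * y + z                      # a scalar field
F = x*y*N.i + y*z*N.j + z*x*N.k       # a vector field

print(gradient(f))      # ∇f  = (2xy, x^2, 1)
print(divergence(F))    # ∇·F = x + y + z
print(curl(F))          # ∇×F = (-y, -z, -x)
```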
Campbell's theorem (probability): In probability theory and statistics, Campbell's theorem, or the Campbell–Hardy theorem, is either a particular equation or a set of results relating the expectation of a function summed over a point process to an integral involving the mean measure of the point process, which allows for the calculation of the expected value and variance of the random sum. One version of the theorem, also known as Campbell's formula, entails an integral equation for the aforementioned sum over a general point process, and not necessarily a Poisson point process.
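For a homogeneous Poisson point process the formula is easy to test by simulation. The sketch below (illustrative, not from the entry; the intensity λ, window [0, 1], function f, and seed are arbitrary choices) checks that E[Σ_i f(x_i)] = λ ∫ f(x) dx and, in the Poisson case, Var[Σ_i f(x_i)] = λ ∫ f(x)² dx.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, n_reps = 50.0, 20_000                    # intensity of the process, number of runs
f = lambda x: x**2                            # function summed over the points

totals = np.empty(n_reps)
for r in range(n_reps):
    n_pts = rng.poisson(lam)                  # Poisson number of points on [0, 1]
    pts = rng.uniform(0.0, 1.0, n_pts)        # point locations, uniform given the count
    totals[r] = f(pts).sum()

print(totals.mean(), lam / 3.0)               # E[sum f]   = lam * ∫ x^2 dx = lam/3 ≈ 16.67
print(totals.var(),  lam / 5.0)               # Var[sum f] = lam * ∫ x^4 dx = lam/5 = 10
```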
Correlation function: A correlation function is a function that gives the statistical correlation between random variables as a function of the spatial or temporal distance between those variables. If one considers the correlation function between random variables representing the same quantity measured at two different points, then this is often referred to as an autocorrelation function, which is made up of autocorrelations.
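As a concrete example (not from the entry; the AR(1) model, its parameter ρ, series length and seed are my own choices): for a stationary AR(1) process x_t = ρ x_{t−1} + ε_t, the autocorrelation function at lag k is ρ^k, which the following sketch estimates empirically.

```python
import numpy as np

rng = np.random.default_rng(3)
rho, n = 0.8, 100_000

eps = rng.normal(size=n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):                          # AR(1): x_t = rho * x_{t-1} + eps_t
    x[t] = rho * x[t - 1] + eps[t]

x -= x.mean()
def autocorr(series, lag):                     # sample autocorrelation at a given lag
    return np.dot(series[:-lag], series[lag:]) / np.dot(series, series)

for lag in (1, 2, 5):
    print(lag, autocorr(x, lag), rho**lag)     # empirical vs. theoretical rho**lag
```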
Riesz potential: In mathematics, the Riesz potential is a potential named after its discoverer, the Hungarian mathematician Marcel Riesz. In a sense, the Riesz potential defines an inverse for a power of the Laplace operator on Euclidean space. Riesz potentials generalize to several variables the Riemann–Liouville integrals of one variable. If 0 < α < n, then the Riesz potential I_α f of a locally integrable function f on R^n is the function defined by (I_α f)(x) = (1/c_α) ∫_{R^n} f(y) / |x − y|^{n−α} dy, where the constant is given by c_α = π^{n/2} 2^α Γ(α/2) / Γ((n − α)/2). This singular integral is well-defined provided f decays sufficiently rapidly at infinity, specifically if f ∈ L^p(R^n) with 1 ≤ p < n/α.
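A hedged numerical sketch (not part of the entry; the dimension n = 1, the exponent α = 0.6, the Gaussian test function and the truncation of the integrals are my own choices, and it assumes the Fourier convention f̂(ξ) = ∫ f(x) e^{−2πi xξ} dx, under which I_α acts as multiplication by (2π|ξ|)^{−α}): evaluate I_α f at 0 for f(x) = e^{−πx²}, which is its own Fourier transform, both from the defining singular integral and from the Fourier side, and check that the two values agree.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

n, alpha = 1, 0.6
c = np.pi ** (n / 2) * 2**alpha * gamma(alpha / 2) / gamma((n - alpha) / 2)

f = lambda y: np.exp(-np.pi * y**2)             # Gaussian, equal to its own Fourier transform

def kernel(y):                                  # f(y) |0 - y|^(alpha - n), guarded at y = 0
    return f(y) * abs(y) ** (alpha - n) if y != 0 else 0.0

def multiplier(xi):                             # (2 pi |xi|)^(-alpha) * fhat(xi), guarded at xi = 0
    return (2 * np.pi * abs(xi)) ** (-alpha) * f(xi) if xi != 0 else 0.0

direct, _  = quad(kernel, -30, 30, points=[0.0])      # singular-integral definition at x = 0
direct    /= c
fourier, _ = quad(multiplier, -30, 30, points=[0.0])  # Fourier-side value at x = 0

print(direct, fourier)        # both ≈ 1.21: the two descriptions agree
```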