Stirling's approximation
In mathematics, Stirling's approximation (or Stirling's formula) is an approximation for factorials. It is a good approximation, leading to accurate results even for small values of $n$. It is named after James Stirling, though a related but less precise result was first stated by Abraham de Moivre. One way of stating the approximation involves the logarithm of the factorial: $\ln(n!) = n \ln n - n + O(\ln n)$, where the big O notation means that, for all sufficiently large values of $n$, the difference between $\ln(n!)$ and $n \ln n - n$ will be at most proportional to the logarithm of $n$.
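As a quick numerical illustration (a minimal Python sketch using only the standard library; the helper name `stirling_log_factorial` and the sample values of $n$ are chosen here for illustration), the following compares $\ln(n!)$ with the Stirling estimate. The extra $\tfrac12\ln(2\pi n)$ term is the next term of the series and is included only to make the agreement visible at small $n$.

```python
import math

def stirling_log_factorial(n: int) -> float:
    # Leading terms of Stirling's series for ln(n!)
    return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)

for n in (5, 10, 50):
    exact = math.lgamma(n + 1)            # ln(n!) computed via the log-gamma function
    approx = stirling_log_factorial(n)
    print(n, exact, approx, exact - approx)
```

Even at $n = 5$ the two values agree to within about a hundredth, and the gap shrinks as $n$ grows, consistent with the $O(\ln n)$ error bound stated above.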
Linear approximation
In mathematics, a linear approximation is an approximation of a general function using a linear function (more precisely, an affine function). Linear approximations are widely used in the method of finite differences to produce first-order methods for solving or approximating solutions to equations. Given a twice continuously differentiable function $f$ of one real variable, Taylor's theorem for the case $n = 1$ states that $f(x) = f(a) + f'(a)(x - a) + R_2$, where $R_2$ is the remainder term.
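A small worked sketch in Python (the function, derivative, and the point $a = 4$ are example choices, not part of the definition): the tangent line of $\sqrt{x}$ at $a$ approximates nearby values of the square root.

```python
import math

def linear_approx(f, df, a, x):
    # First-order Taylor approximation of f around the point a, evaluated at x
    return f(a) + df(a) * (x - a)

# Approximate sqrt(4.1) using the tangent line of sqrt at a = 4
f = math.sqrt
df = lambda t: 0.5 / math.sqrt(t)
print(linear_approx(f, df, 4.0, 4.1), math.sqrt(4.1))
```

The difference between the two printed values is the remainder term $R_2$, which Taylor's theorem controls in terms of the second derivative.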
Continued fraction
In mathematics, a continued fraction is an expression obtained through an iterative process of representing a number as the sum of its integer part and the reciprocal of another number, then writing this other number as the sum of its integer part and another reciprocal, and so on. In a finite continued fraction (or terminated continued fraction), the iteration/recursion is terminated after finitely many steps by using an integer in lieu of another continued fraction. In contrast, an infinite continued fraction is an infinite expression.
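The iterative process described above can be sketched directly in Python for a rational input (exact arithmetic via `fractions.Fraction`; the input 355/113 is just an example):

```python
from fractions import Fraction

def continued_fraction(x: Fraction, max_terms: int = 10) -> list[int]:
    # Repeatedly split off the integer part and take the reciprocal
    # of what remains; stop when the remainder is zero.
    terms = []
    for _ in range(max_terms):
        a = x.numerator // x.denominator   # integer part (floor)
        terms.append(a)
        x = x - a
        if x == 0:
            break
        x = 1 / x                          # reciprocal of the fractional part
    return terms

print(continued_fraction(Fraction(355, 113)))   # [3, 7, 16]
```

A rational number always yields a finite (terminated) continued fraction, as in the output above; an irrational number would make the loop continue indefinitely, corresponding to an infinite continued fraction.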
Taylor seriesIn mathematics, the Taylor series or Taylor expansion of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point. For most common functions, the function and the sum of its Taylor series are equal near this point. Taylor series are named after Brook Taylor, who introduced them in 1715. A Taylor series is also called a Maclaurin series when 0 is the point where the derivatives are considered, after Colin Maclaurin, who made extensive use of this special case of Taylor series in the mid-18th century.
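As an illustrative sketch (Python; the choice of the exponential function and of the truncation lengths is an assumption made here for the example), the Maclaurin series of $e^x$ is $\sum_{k\ge 0} x^k/k!$, and its partial sums converge quickly to the true value:

```python
import math

def exp_taylor(x: float, terms: int) -> float:
    # Partial sum of the Maclaurin series: sum_{k=0}^{terms-1} x^k / k!
    return sum(x**k / math.factorial(k) for k in range(terms))

for n in (2, 5, 10):
    print(n, exp_taylor(1.0, n), math.exp(1.0))
```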
Generalized function
In mathematics, generalized functions are objects extending the notion of functions. There is more than one recognized theory, for example the theory of distributions. Generalized functions are especially useful in making discontinuous functions more like smooth functions, and describing discrete physical phenomena such as point charges. They are applied extensively, especially in physics and engineering. A common feature of some of the approaches is that they build on operator aspects of everyday, numerical functions.
Error function
In mathematics, the error function (also called the Gauss error function), often denoted by erf, is a complex function of a complex variable defined as $\operatorname{erf}(z) = \frac{2}{\sqrt{\pi}} \int_0^z e^{-t^2}\,dt$. Some authors define erf without the factor of $2/\sqrt{\pi}$. This nonelementary integral is a sigmoid function that occurs often in probability, statistics, and partial differential equations. In many of these applications, the function argument is a real number. If the function argument is real, then the function value is also real.
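Because the integral is nonelementary, erf is usually evaluated numerically. A minimal Python sketch (the midpoint rule and the step count are arbitrary illustrative choices) approximates the defining integral and compares it with the standard library's `math.erf`:

```python
import math

def erf_numeric(x: float, steps: int = 10_000) -> float:
    # Midpoint-rule approximation of (2/sqrt(pi)) * integral from 0 to x of exp(-t^2) dt
    h = x / steps
    total = sum(math.exp(-((i + 0.5) * h) ** 2) for i in range(steps))
    return 2.0 / math.sqrt(math.pi) * h * total

for x in (0.5, 1.0, 2.0):
    print(x, erf_numeric(x), math.erf(x))
```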
Liouville number
In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exists a pair of integers $(p, q)$ with $q > 1$ such that $0 < \left| x - \frac{p}{q} \right| < \frac{1}{q^n}$. Liouville numbers are "almost rational", and can thus be approximated "quite closely" by sequences of rational numbers. Precisely, these are transcendental numbers that can be more closely approximated by rational numbers than any algebraic irrational number can be. In 1844, Joseph Liouville showed that all Liouville numbers are transcendental, thus establishing the existence of transcendental numbers for the first time.
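The standard example is Liouville's constant $\sum_{k\ge 1} 10^{-k!}$. The sketch below (Python, exact rational arithmetic; it uses a finite 5-term truncation of the constant as a stand-in for $x$, which is an assumption made purely so the check can run) verifies the defining inequality for a few values of $n$, taking $p/q$ to be the first $n$ terms of the sum and $q = 10^{n!}$:

```python
import math
from fractions import Fraction

# 5-term truncation of Liouville's constant sum_{k>=1} 10^(-k!), used as a stand-in for x
x = sum(Fraction(1, 10 ** math.factorial(k)) for k in range(1, 6))

for n in (2, 3, 4):
    q = 10 ** math.factorial(n)
    p_over_q = sum(Fraction(1, 10 ** math.factorial(k)) for k in range(1, n + 1))
    diff = abs(x - p_over_q)
    # The defining inequality 0 < |x - p/q| < 1/q^n holds for each n tested
    print(n, 0 < diff < Fraction(1, q ** n))
```

The remaining tail after the first $n$ terms is smaller than $10^{-(n+1)!}$, which is in turn smaller than $q^{-n} = 10^{-n \cdot n!}$, which is why the approximation beats the $1/q^n$ bound for every $n$.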
Sign (mathematics)
In mathematics, the sign of a real number is its property of being either positive, negative, or zero. Depending on local conventions, zero may be considered as being neither positive nor negative (having no sign or a unique third sign), or it may be considered both positive and negative (having both signs). Whenever not specifically mentioned, this article adheres to the first convention. In some contexts, it makes sense to consider a signed zero (such as floating-point representations of real numbers within computers).
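A small Python sketch (the helper name `sign` is an illustrative choice) shows the first convention, where zero has no sign, and also the signed zero that IEEE 754 floating-point numbers carry:

```python
import math

def sign(x: float) -> int:
    # Conventional sign function: -1, 0, or +1, treating zero as unsigned
    return (x > 0) - (x < 0)

print(sign(-3.5), sign(0.0), sign(2.0))                    # -1 0 1
# Floating-point zeros carry a sign bit, which copysign can reveal:
print(math.copysign(1.0, 0.0), math.copysign(1.0, -0.0))   # 1.0 -1.0
```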
Matrix exponential
In mathematics, the matrix exponential is a matrix function on square matrices analogous to the ordinary exponential function. It is used to solve systems of linear differential equations. In the theory of Lie groups, the matrix exponential gives the exponential map between a matrix Lie algebra and the corresponding Lie group. Let X be an n×n real or complex matrix. The exponential of X, denoted by eX or exp(X), is the n×n matrix given by the power series $e^X = \sum_{k=0}^{\infty} \frac{1}{k!} X^k$, where $X^0$ is defined to be the identity matrix $I$ with the same dimensions as $X$.
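A minimal sketch of the power series in Python with NumPy (the truncation length, the test matrix, and the helper name `expm_series` are illustrative assumptions): exponentiating $t\begin{psmallmatrix}0 & -1\\ 1 & 0\end{psmallmatrix}$ should reproduce the 2×2 rotation matrix by angle $t$.

```python
import numpy as np

def expm_series(X: np.ndarray, terms: int = 30) -> np.ndarray:
    # Truncated power series: sum_{k=0}^{terms-1} X^k / k!
    result = np.eye(X.shape[0])
    term = np.eye(X.shape[0])
    for k in range(1, terms):
        term = term @ X / k        # build X^k / k! incrementally
        result = result + term
    return result

t = 0.7
X = t * np.array([[0.0, -1.0], [1.0, 0.0]])
print(expm_series(X))
print(np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]))
```

Truncating the series is adequate here only because the test matrix is small in norm; production code typically uses scaling-and-squaring or a library routine instead.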
Square root of 2
The square root of 2 (approximately 1.4142) is a positive real number that, when multiplied by itself, equals the number 2. It may be written in mathematics as $\sqrt{2}$ or $2^{1/2}$. It is an algebraic number, and therefore not a transcendental number. Technically, it should be called the principal square root of 2, to distinguish it from the negative number with the same property. Geometrically, the square root of 2 is the length of a diagonal across a square with sides of one unit of length; this follows from the Pythagorean theorem.
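As a brief numerical sketch (Python; the starting guess and iteration count are arbitrary illustrative choices), Newton's method applied to $x^2 = 2$ converges rapidly to the value quoted above:

```python
# Newton's method for x^2 = 2: x_{k+1} = (x_k + 2/x_k) / 2
x = 1.0
for _ in range(6):
    x = (x + 2.0 / x) / 2.0
    print(x)
print(x * x)   # very close to 2
```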