Bernoulli polynomials

In mathematics, the Bernoulli polynomials, named after Jacob Bernoulli, combine the Bernoulli numbers and binomial coefficients. They are used in the series expansion of functions and in the Euler–Maclaurin formula. These polynomials occur in the study of many special functions, in particular the Riemann zeta function and the Hurwitz zeta function. They are an Appell sequence (i.e. a Sheffer sequence for the ordinary derivative operator). For the Bernoulli polynomials, the number of crossings of the x-axis in the unit interval does not increase with the degree.
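The connection to Bernoulli numbers and binomial coefficients can be made concrete with the explicit formula B_n(x) = Σ_{k=0}^{n} C(n, k) B_{n−k} x^k. The sketch below is a minimal illustration of that formula in Python (the helper names bernoulli_numbers and bernoulli_poly are chosen here for illustration, not taken from any library), using the convention B_1 = −1/2.

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    """Bernoulli numbers B_0..B_n (convention B_1 = -1/2) via the standard recurrence
    sum_{k=0}^{m} C(m+1, k) B_k = 0 for m >= 1."""
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        B[m] = -sum(Fraction(comb(m + 1, k)) * B[k] for k in range(m)) / (m + 1)
    return B

def bernoulli_poly(n, x):
    """Evaluate B_n(x) = sum_{k=0}^{n} C(n, k) B_{n-k} x^k."""
    B = bernoulli_numbers(n)
    return sum(comb(n, k) * B[n - k] * Fraction(x) ** k for k in range(n + 1))

# B_2(x) = x^2 - x + 1/6, so B_2(0) = 1/6; B_3(1/2) = 0.
print(bernoulli_poly(2, 0))               # 1/6
print(bernoulli_poly(3, Fraction(1, 2)))  # 0
```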
Eigendecomposition of a matrix

In linear algebra, eigendecomposition is the factorization of a matrix into a canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors. Only diagonalizable matrices can be factorized in this way. When the matrix being factorized is a normal or real symmetric matrix, the decomposition is called "spectral decomposition", derived from the spectral theorem.

Eigenvalue, eigenvector and eigenspace

A (nonzero) vector v of dimension N is an eigenvector of a square N × N matrix A if it satisfies a linear equation of the form Av = λv for some scalar λ.
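As a rough sketch of the decomposition in practice, the following Python snippet uses NumPy's eigensolver on a small symmetric (hence diagonalizable) matrix and reassembles A = QΛQ⁻¹; the variable names are illustrative only.

```python
import numpy as np

# A small real symmetric matrix, so its eigendecomposition is a spectral decomposition.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

eigenvalues, Q = np.linalg.eig(A)   # columns of Q are the eigenvectors

# Reassemble A = Q diag(lambda) Q^{-1} and verify the factorization.
Lambda = np.diag(eigenvalues)
A_reconstructed = Q @ Lambda @ np.linalg.inv(Q)

print(np.allclose(A, A_reconstructed))                       # True
print(np.allclose(A @ Q[:, 0], eigenvalues[0] * Q[:, 0]))    # Av = lambda v for the first pair
```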
Vandermonde matrix

In linear algebra, a Vandermonde matrix, named after Alexandre-Théophile Vandermonde, is a matrix with the terms of a geometric progression in each row: an (m+1) × (n+1) matrix with entries V_{i,j} = x_i^j, the jth power of the number x_i, for all zero-based indices i and j. Most authors define the Vandermonde matrix as the transpose of the above matrix. The determinant of a square Vandermonde matrix (when m = n) is called a Vandermonde determinant or Vandermonde polynomial. Its value is

    det(V) = ∏_{0 ≤ i < j ≤ n} (x_j − x_i).

This is non-zero if and only if all the x_i are distinct (no two are equal), making the Vandermonde matrix invertible.
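As a small numerical check of the determinant formula, the sketch below builds a square Vandermonde matrix with NumPy's np.vander (with increasing=True so that V[i, j] = x[i]**j, matching the zero-based convention above) and compares the determinant against the product over all pairs.

```python
import numpy as np
from itertools import combinations

x = np.array([1.0, 2.0, 3.0, 5.0])     # distinct nodes, so the matrix is invertible

# Square Vandermonde matrix with increasing powers: V[i, j] = x[i]**j.
V = np.vander(x, increasing=True)

# Determinant from the product formula: prod_{i < j} (x_j - x_i).
det_formula = np.prod([x[j] - x[i] for i, j in combinations(range(len(x)), 2)])

print(np.allclose(np.linalg.det(V), det_formula))  # True
print(det_formula)                                 # non-zero because all x_i are distinct
```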
Matrix splitting

In the mathematical discipline of numerical linear algebra, a matrix splitting is an expression which represents a given matrix as a sum or difference of matrices. Many iterative methods (for example, for systems of differential equations) depend upon the direct solution of matrix equations involving matrices more general than tridiagonal matrices. These matrix equations can often be solved directly and efficiently when written as a matrix splitting. The technique was devised by Richard S. Varga in 1960.
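One classical instance is the Jacobi splitting A = M − N with M = diag(A), where each iteration only requires solving the easy diagonal system. The Python sketch below (the helper name jacobi_solve is hypothetical) illustrates the resulting fixed-point iteration x_{k+1} = M⁻¹(N x_k + b) on a strictly diagonally dominant matrix, for which the iteration is known to converge.

```python
import numpy as np

def jacobi_solve(A, b, iterations=50):
    """Solve A x = b via the Jacobi splitting A = M - N with M = diag(A).

    Each step solves the easy system M x_{k+1} = N x_k + b; the iteration
    converges when A is strictly diagonally dominant."""
    M_diag = np.diag(A)
    N = np.diag(M_diag) - A          # remainder, so that A = M - N
    x = np.zeros_like(b, dtype=float)
    for _ in range(iterations):
        x = (N @ x + b) / M_diag     # apply M^{-1}, which is just a diagonal scaling
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])      # strictly diagonally dominant
b = np.array([1.0, 2.0, 3.0])

print(np.allclose(jacobi_solve(A, b), np.linalg.solve(A, b), atol=1e-8))  # True
```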
Matrix calculus

In mathematics, matrix calculus is a specialized notation for doing multivariable calculus, especially over spaces of matrices. It collects the various partial derivatives of a single function with respect to many variables, and/or of a multivariate function with respect to a single variable, into vectors and matrices that can be treated as single entities. This greatly simplifies operations such as finding the maximum or minimum of a multivariate function and solving systems of differential equations.
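A standard identity of this kind is the gradient of the quadratic form x"ᵀ"Ax, namely ∇(xᵀAx) = (A + Aᵀ)x. The sketch below is a minimal numerical check of that single identity with NumPy and a central finite difference; it is meant as an illustration of the notation, not a general matrix-calculus tool.

```python
import numpy as np

# Matrix-calculus identity: d/dx (x^T A x) = (A + A^T) x.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
x = rng.standard_normal(4)

analytic = (A + A.T) @ x

# Central finite-difference check of the gradient, one coordinate at a time.
eps = 1e-6
numeric = np.array([
    ((x + eps * e) @ A @ (x + eps * e) - (x - eps * e) @ A @ (x - eps * e)) / (2 * eps)
    for e in np.eye(4)
])

print(np.allclose(analytic, numeric, atol=1e-5))  # True
```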
Teleparallelism

Teleparallelism (also called teleparallel gravity) was an attempt by Albert Einstein to base a unified theory of electromagnetism and gravity on the mathematical structure of distant parallelism, also referred to as absolute parallelism or teleparallelism. In this theory, a spacetime is characterized by a curvature-free linear connection in conjunction with a metric tensor field, both defined in terms of a dynamical tetrad field. The crucial new idea, for Einstein, was the introduction of a tetrad field, i.e. a set of four vector fields that form a basis of the tangent space at every point of spacetime.
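To make the dependence of the metric on the tetrad concrete, a common construction (up to sign conventions, which vary between authors) is g_{μν} = η_{ab} e^a_μ e^b_ν, with η the Minkowski metric. The NumPy sketch below illustrates that contraction at a single spacetime point for an illustrative diagonal tetrad; it is a toy example, not a teleparallel gravity computation.

```python
import numpy as np

# Minkowski metric eta_{ab}, here with signature (-, +, +, +) (a convention choice).
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# A sample co-frame (tetrad components) e^a_mu at one point: rows indexed by a, columns by mu.
e = np.diag([1.0, 2.0, 1.0, 1.0])

# Metric determined by the tetrad: g_{mu nu} = eta_{ab} e^a_mu e^b_nu.
g = np.einsum('ab,am,bn->mn', eta, e, e)

print(g)  # diag(-1, 4, 1, 1) for this sample tetrad
```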
Constant-recursive sequence

In mathematics and theoretical computer science, a constant-recursive sequence is an infinite sequence of numbers where each number in the sequence is equal to a fixed linear combination of one or more of its immediate predecessors. A constant-recursive sequence is also known as a linear recurrence sequence, linear-recursive sequence, linear-recurrent sequence, a C-finite sequence, or a solution to a linear recurrence with constant coefficients.
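A minimal sketch of the definition in Python (the helper name constant_recursive is chosen here for illustration): given the coefficients of the fixed linear combination and the initial terms, every later term is computed from its immediate predecessors. The Fibonacci numbers and the powers of two are standard examples of such sequences.

```python
def constant_recursive(coeffs, initial, n):
    """n-th term of s(k) = coeffs[0]*s(k-1) + ... + coeffs[d-1]*s(k-d),
    given the d initial terms s(0), ..., s(d-1)."""
    s = list(initial)
    for k in range(len(initial), n + 1):
        s.append(sum(c * s[k - 1 - i] for i, c in enumerate(coeffs)))
    return s[n]

# Fibonacci: F(n) = F(n-1) + F(n-2), with F(0) = 0, F(1) = 1.
print([constant_recursive([1, 1], [0, 1], n) for n in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]

# Powers of two satisfy s(n) = 2*s(n-1), so they are constant-recursive as well.
print(constant_recursive([2], [1], 10))  # 1024
```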