Jacobi polynomials

In mathematics, Jacobi polynomials (occasionally called hypergeometric polynomials) are a class of classical orthogonal polynomials. They are orthogonal with respect to the weight $(1-x)^\alpha (1+x)^\beta$ on the interval $[-1, 1]$. The Gegenbauer polynomials, and thus also the Legendre, Zernike and Chebyshev polynomials, are special cases of the Jacobi polynomials. The Jacobi polynomials were introduced by Carl Gustav Jacob Jacobi. They are defined via the hypergeometric function as follows:

$$P_n^{(\alpha,\beta)}(x) = \frac{(\alpha+1)_n}{n!} \, {}_2F_1\!\left(-n,\, 1+\alpha+\beta+n;\, \alpha+1;\, \tfrac{1}{2}(1-x)\right),$$

where $(\alpha+1)_n$ is Pochhammer's symbol (for the rising factorial).
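As a concrete check on this definition, here is a minimal Python sketch that evaluates the terminating ${}_2F_1$ series directly and compares it with SciPy's eval_jacobi; the helper name jacobi_2f1 and the sample parameters are illustrative choices, not from the original text:

```python
from scipy.special import eval_jacobi

def jacobi_2f1(n, alpha, beta, x):
    """Evaluate P_n^(alpha,beta)(x) from the terminating 2F1 series.
    A sketch for illustration; use scipy.special.eval_jacobi in practice."""
    # Prefactor (alpha+1)_n / n!
    pref = 1.0
    for j in range(n):
        pref *= (alpha + 1 + j) / (j + 1)
    # Terminating series 2F1(-n, n+alpha+beta+1; alpha+1; (1-x)/2),
    # built term by term from the standard ratio of consecutive terms.
    z = (1.0 - x) / 2.0
    term, total = 1.0, 1.0
    for k in range(n):
        term *= (-n + k) * (n + alpha + beta + 1 + k) / ((alpha + 1 + k) * (k + 1)) * z
        total += term
    return pref * total

x = 0.3
print(jacobi_2f1(3, 0.5, -0.25, x))   # series definition
print(eval_jacobi(3, 0.5, -0.25, x))  # SciPy reference, same value
```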
Chebyshev polynomials

The Chebyshev polynomials are two sequences of polynomials related to the cosine and sine functions, notated as $T_n(x)$ and $U_n(x)$. They can be defined in several equivalent ways, one of which starts with trigonometric functions: the Chebyshev polynomials of the first kind are defined by

$$T_n(\cos\theta) = \cos(n\theta).$$

Similarly, the Chebyshev polynomials of the second kind are defined by

$$U_n(\cos\theta)\,\sin\theta = \sin\big((n+1)\theta\big).$$

That these expressions define polynomials in $\cos\theta$ may not be obvious at first sight, but follows by rewriting $\cos(n\theta)$ and $\sin\big((n+1)\theta\big)$ using de Moivre's formula or by using the angle sum formulas for $\cos$ and $\sin$ repeatedly.
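Both trigonometric identities are easy to verify numerically. The sketch below uses SciPy's eval_chebyt and eval_chebyu; the sample values of n and θ are arbitrary choices:

```python
import numpy as np
from scipy.special import eval_chebyt, eval_chebyu

theta = 0.7
n = 5
x = np.cos(theta)

# First kind: T_n(cos θ) = cos(nθ)
print(eval_chebyt(n, x), np.cos(n * theta))

# Second kind: U_n(cos θ) sin θ = sin((n+1)θ)
print(eval_chebyu(n, x) * np.sin(theta), np.sin((n + 1) * theta))
```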
Numerical methods for ordinary differential equations

Numerical methods for ordinary differential equations are methods used to find numerical approximations to the solutions of ordinary differential equations (ODEs). Their use is also known as "numerical integration", although this term can also refer to the computation of integrals. Many differential equations cannot be solved exactly. For practical purposes, however – such as in engineering – a numeric approximation to the solution is often sufficient. The algorithms studied here can be used to compute such an approximation.
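As an illustration of the general idea, here is a minimal sketch of the forward Euler method, one of the simplest such algorithms; the function name euler, the test problem $y' = -y$, and the step size are illustrative choices:

```python
import numpy as np

def euler(f, t0, y0, t_end, h):
    """Forward Euler: advance the approximation by y_{k+1} = y_k + h*f(t_k, y_k)."""
    ts, ys = [t0], [y0]
    t, y = t0, y0
    while t < t_end - 1e-12:
        y = y + h * f(t, y)
        t = t + h
        ts.append(t)
        ys.append(y)
    return np.array(ts), np.array(ys)

# Example: y' = -y with y(0) = 1; the exact solution is exp(-t).
ts, ys = euler(lambda t, y: -y, 0.0, 1.0, 1.0, 0.01)
print(ys[-1], np.exp(-1.0))  # numerical approximation vs. exact value at t = 1
```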
Hermite polynomials

In mathematics, the Hermite polynomials are a classical orthogonal polynomial sequence. The polynomials arise in: signal processing as Hermitian wavelets for wavelet transform analysis; probability, such as the Edgeworth series, as well as in connection with Brownian motion; combinatorics, as an example of an Appell sequence, obeying the umbral calculus; numerical analysis as Gaussian quadrature; physics, where they give rise to the eigenstates of the quantum harmonic oscillator, and they also occur in some cases of the heat equation (when the term $xu_x$ is present); and systems theory in connection with nonlinear operations on Gaussian noise.
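One concrete instance of the Gaussian quadrature connection: the roots of the physicists' Hermite polynomial $H_n$ serve as the nodes of the Gauss–Hermite rule. A minimal sketch using NumPy's hermgauss (the 5-point rule and the test integrand are arbitrary choices):

```python
import numpy as np

# Gauss–Hermite quadrature: the nodes are the roots of H_n, and the rule
# integrates f(x) * exp(-x**2) over the real line exactly for polynomial f
# of degree <= 2n - 1.
nodes, weights = np.polynomial.hermite.hermgauss(5)
approx = np.sum(weights * nodes**2)   # approximates the integral of x^2 e^{-x^2}
print(approx, np.sqrt(np.pi) / 2)     # both are about 0.8862269...
```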
Computational science

Computational science, also known as scientific computing, technical computing or scientific computation (SC), is a division of science that uses advanced computing capabilities to understand and solve complex physical problems. This includes algorithms (numerical and non-numerical): mathematical models, computational models, and computer simulations developed to solve problems in the sciences (e.g., biological, physical, and social), engineering, and the humanities.
Power iteration

In mathematics, power iteration (also known as the power method) is an eigenvalue algorithm: given a diagonalizable matrix $A$, the algorithm will produce a number $\lambda$, which is the greatest (in absolute value) eigenvalue of $A$, and a nonzero vector $v$, which is a corresponding eigenvector of $\lambda$, that is, $Av = \lambda v$. The algorithm is also known as the Von Mises iteration. Power iteration is a very simple algorithm, but it may converge slowly.
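A minimal NumPy sketch of the method follows; the function name power_iteration, the tolerance, and the 2×2 test matrix are illustrative assumptions:

```python
import numpy as np

def power_iteration(A, num_iters=1000, tol=1e-10):
    """Return an estimate of the dominant eigenpair (lambda, v) of A."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(num_iters):
        w = A @ v                       # one multiplication by A per step
        v_new = w / np.linalg.norm(w)   # renormalize to avoid overflow
        lam_new = v_new @ A @ v_new     # Rayleigh quotient estimate of lambda
        if abs(lam_new - lam) < tol:
            lam, v = lam_new, v_new
            break
        lam, v = lam_new, v_new
    return lam, v

A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, v = power_iteration(A)
print(lam)               # about 3.618, the largest eigenvalue (5 + sqrt(5))/2
print(A @ v - lam * v)   # residual is near zero, so Av = lambda*v holds
```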
Homogeneous differential equation

A differential equation can be homogeneous in either of two respects. A first-order differential equation is said to be homogeneous if it may be written

$$f(x, y)\,dy = g(x, y)\,dx,$$

where f and g are homogeneous functions of the same degree of x and y. In this case, the change of variable y = ux leads to an equation of the form

$$\frac{dx}{x} = h(u)\,du,$$

which is easy to solve by integration of the two members. Otherwise, a differential equation is homogeneous if it is a homogeneous function of the unknown function and its derivatives.
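The substitution can be checked symbolically. The sketch below applies SymPy's dsolve to an example equation, $dy/dx = (x^2 + y^2)/(xy)$, an arbitrary choice whose numerator and denominator are both homogeneous of degree 2:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# dy/dx = (x**2 + y**2) / (x*y): both sides are homogeneous of degree 2,
# so the substitution y = u*x separates the variables.
ode = sp.Eq(y(x).diff(x), (x**2 + y(x)**2) / (x * y(x)))
print(sp.dsolve(ode, hint='1st_homogeneous_coeff_best'))
```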
Computational statistics

Computational statistics, or statistical computing, is the bond between statistics and computer science. It refers to statistical methods that are enabled by computational methods. It is the area of computational science (or scientific computing) specific to the mathematical science of statistics. This area is also developing rapidly, leading to calls that a broader concept of computing should be taught as part of general statistical education.
Eigendecomposition of a matrix

In linear algebra, eigendecomposition is the factorization of a matrix into a canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors. Only diagonalizable matrices can be factorized in this way. When the matrix being factorized is a normal or real symmetric matrix, the decomposition is called "spectral decomposition", derived from the spectral theorem.

Eigenvalue, eigenvector and eigenspace

A (nonzero) vector v of dimension N is an eigenvector of a square N × N matrix A if it satisfies a linear equation of the form

$$Av = \lambda v$$

for some scalar λ.
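A short NumPy illustration (the 2×2 matrix is an arbitrary example): np.linalg.eig returns the eigenvalues and eigenvectors, from which the factorization $A = V \Lambda V^{-1}$ can be reconstructed:

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])   # a diagonalizable matrix
eigvals, V = np.linalg.eig(A)            # columns of V are eigenvectors
Lambda = np.diag(eigvals)

# Reconstruct A = V Lambda V^{-1}
print(V @ Lambda @ np.linalg.inv(V))     # recovers A up to rounding

# For a normal or real symmetric matrix, np.linalg.eigh gives the spectral
# decomposition with orthonormal eigenvectors, so V^{-1} is just V.T.
```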
Lanczos algorithm

The Lanczos algorithm is an iterative method devised by Cornelius Lanczos that is an adaptation of power methods to find the $m$ "most useful" (tending towards extreme highest/lowest) eigenvalues and eigenvectors of an $n \times n$ Hermitian matrix, where $m$ is often but not necessarily much smaller than $n$. Although computationally efficient in principle, the method as initially formulated was not useful, due to its numerical instability. In 1970, Ojalvo and Newman showed how to make the method numerically stable and applied it to the solution of very large engineering structures subjected to dynamic loading.
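A minimal sketch of the basic iteration is shown below; the helper name lanczos, the diagonal test matrix, and m = 20 are illustrative choices, and production code would instead use a stabilized implementation such as the one behind scipy.sparse.linalg.eigsh:

```python
import numpy as np

def lanczos(A, m, seed=0):
    """Basic m-step Lanczos tridiagonalization of a real symmetric A.
    Returns the Ritz values (eigenvalues of the tridiagonal matrix T).
    No reorthogonalization, so it inherits the instability the original
    formulation suffered from; a sketch, not a robust implementation."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    v_prev = np.zeros(n)
    alphas, betas = [], []
    beta = 0.0
    for _ in range(m):
        w = A @ v - beta * v_prev   # three-term recurrence
        alpha = v @ w
        w -= alpha * v
        beta = np.linalg.norm(w)    # breakdown if beta ~ 0 (ignored here)
        alphas.append(alpha)
        betas.append(beta)
        v_prev, v = v, w / beta
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    return np.linalg.eigvalsh(T)

# Symmetric test matrix with known eigenvalues 1, 2, ..., 100: the extreme
# Ritz values converge quickly to the extreme eigenvalues of A.
A = np.diag(np.arange(1.0, 101.0))
print(lanczos(A, 20)[-1])   # close to 100, the largest eigenvalue
```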