Eigendecomposition of a matrix
In linear algebra, eigendecomposition is the factorization of a matrix into a canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors. Only diagonalizable matrices can be factorized in this way. When the matrix being factorized is a normal or real symmetric matrix, the decomposition is called "spectral decomposition", derived from the spectral theorem.
Eigenvalue, eigenvector and eigenspace
A (nonzero) vector v of dimension N is an eigenvector of a square N × N matrix A if it satisfies a linear equation of the form $A\mathbf{v} = \lambda \mathbf{v}$ for some scalar $\lambda$.
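For concreteness, the factorization itself takes the following standard form when $A$ is an $N \times N$ matrix with $N$ linearly independent eigenvectors:

$$A = Q \Lambda Q^{-1},$$

where the columns of $Q$ are the eigenvectors of $A$ and $\Lambda$ is the diagonal matrix whose entries are the corresponding eigenvalues.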
Minimal polynomial (linear algebra)
In linear algebra, the minimal polynomial $\mu_A$ of an n × n matrix A over a field F is the monic polynomial P over F of least degree such that P(A) = 0. Any other polynomial Q with Q(A) = 0 is a (polynomial) multiple of $\mu_A$. The following three statements are equivalent: $\lambda$ is a root of $\mu_A$, $\lambda$ is a root of the characteristic polynomial $\chi_A$ of A, $\lambda$ is an eigenvalue of matrix A. The multiplicity of a root $\lambda$ of $\mu_A$ is the largest power m such that $\ker\big((A - \lambda I_n)^m\big)$ strictly contains $\ker\big((A - \lambda I_n)^{m-1}\big)$.
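A small illustrative example (standard, chosen here for clarity): for the $2 \times 2$ identity matrix $I_2$, the characteristic polynomial is $\chi_{I_2}(x) = (x-1)^2$, while the minimal polynomial is $\mu_{I_2}(x) = x - 1$, since $I_2 - 1 \cdot I_2 = 0$ already. The two polynomials share the root $\lambda = 1$, as the equivalence above requires, but differ in degree.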
Skew-symmetric matrix
In mathematics, particularly in linear algebra, a skew-symmetric (or antisymmetric or antimetric) matrix is a square matrix whose transpose equals its negative. That is, it satisfies the condition $A^\mathsf{T} = -A$. In terms of the entries of the matrix, if $a_{ij}$ denotes the entry in the $i$-th row and $j$-th column, then the skew-symmetric condition is equivalent to $a_{ji} = -a_{ij}$ for all $i$ and $j$. For example, the matrix $A = \begin{pmatrix} 0 & 2 \\ -2 & 0 \end{pmatrix}$ is skew-symmetric because $-A = A^\mathsf{T}$. Throughout, we assume that all matrix entries belong to a field whose characteristic is not equal to 2.
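In general, the condition forces every diagonal entry to be zero (since $a_{ii} = -a_{ii}$ and the characteristic is not 2), so a $3 \times 3$ skew-symmetric matrix has the form

$$\begin{pmatrix} 0 & a & b \\ -a & 0 & c \\ -b & -c & 0 \end{pmatrix}.$$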
Pauli matrices
In mathematical physics and mathematics, the Pauli matrices are a set of three 2 × 2 complex matrices which are Hermitian, involutory and unitary. Usually indicated by the Greek letter sigma (σ), they are occasionally denoted by tau (τ) when used in connection with isospin symmetries. These matrices are named after the physicist Wolfgang Pauli. In quantum mechanics, they occur in the Pauli equation which takes into account the interaction of the spin of a particle with an external electromagnetic field.
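In the standard basis (the universally used convention), the three matrices are

$$\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},$$

each satisfying $\sigma_k^2 = I$, which is what "involutory" means here.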
Axis–angle representation
In mathematics, the axis–angle representation parameterizes a rotation in a three-dimensional Euclidean space by two quantities: a unit vector e indicating the direction of an axis of rotation, and an angle of rotation θ describing the magnitude and sense (e.g., clockwise) of the rotation about the axis. Only two numbers, not three, are needed to define the direction of a unit vector e rooted at the origin, because its magnitude is constrained to equal 1.
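The corresponding rotation matrix can be recovered by Rodrigues' rotation formula (stated here in its standard matrix form):

$$R = I + (\sin\theta)\, K + (1 - \cos\theta)\, K^2,$$

where $K$ is the skew-symmetric cross-product matrix of the unit axis $\mathbf{e} = (e_1, e_2, e_3)$, i.e. $K = \begin{pmatrix} 0 & -e_3 & e_2 \\ e_3 & 0 & -e_1 \\ -e_2 & e_1 & 0 \end{pmatrix}$.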
Ordinary differential equation
In mathematics, an ordinary differential equation (ODE) is a differential equation (DE) dependent on only a single independent variable. As with other DEs, its unknowns consist of one or more functions, and the equation involves the derivatives of those functions. The term "ordinary" is used in contrast with partial differential equations, which may involve more than one independent variable. A linear differential equation is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is, an equation of the form $a_0(x) y + a_1(x) y' + \cdots + a_n(x) y^{(n)} = b(x)$, where $a_0(x), \ldots, a_n(x)$ and $b(x)$ are functions of $x$.
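As a minimal example of an ODE and its solution (a standard textbook case): the first-order linear equation

$$y'(x) = k\, y(x)$$

has the general solution $y(x) = C e^{kx}$ for an arbitrary constant $C$, as can be checked by differentiating.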
Linear differential equation
In mathematics, a linear differential equation is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is, an equation of the form $a_0(x) y + a_1(x) y' + \cdots + a_n(x) y^{(n)} = b(x)$, where $a_0(x), \ldots, a_n(x)$ and $b(x)$ are arbitrary differentiable functions that do not need to be linear, and $y', \ldots, y^{(n)}$ are the successive derivatives of an unknown function $y$ of the variable $x$. Such an equation is an ordinary differential equation (ODE).
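For instance (a standard example, not drawn from the text above), the second-order linear equation

$$y'' + y = 0$$

fits the form with $a_0(x) = 1$, $a_1(x) = 0$, $a_2(x) = 1$, and $b(x) = 0$; its general solution is $y(x) = c_1 \cos x + c_2 \sin x$.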
Analytic function of a matrix
In mathematics, every analytic function can be used for defining a matrix function that maps square matrices with complex entries to square matrices of the same size. This is used for defining the exponential of a matrix, which is involved in the closed-form solution of systems of linear differential equations. There are several techniques for lifting a real function to a square matrix function such that interesting properties are maintained. All of the following techniques yield the same matrix function, but the domains on which the function is defined may differ.
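One such technique, lifting a function through its power series, can be sketched in a few lines of code. The following is a minimal illustration (assuming NumPy and SciPy are available; `matrix_exp_series` is a name chosen here for illustration, not a library function), computing the matrix exponential by a truncated Taylor series and comparing it with SciPy's built-in routine:

```python
import numpy as np
from scipy.linalg import expm

def matrix_exp_series(A, terms=40):
    """Approximate exp(A) by the truncated Taylor series sum_{k=0}^{terms} A^k / k!."""
    n = A.shape[0]
    result = np.eye(n)
    term = np.eye(n)
    for k in range(1, terms + 1):
        term = term @ A / k      # builds A^k / k! incrementally
        result = result + term
    return result

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])      # generator of 2D rotations
print(matrix_exp_series(A))      # ~ [[cos 1, sin 1], [-sin 1, cos 1]]
print(expm(A))                   # SciPy's reference matrix exponential
```

For this $A$, both outputs agree with the matrix of rotation by one radian, illustrating that the series definition and SciPy's algorithm define the same matrix function.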
Magnus expansion
In mathematics and physics, the Magnus expansion, named after Wilhelm Magnus (1907–1990), provides an exponential representation of the solution of a first-order homogeneous linear differential equation for a linear operator. In particular, it furnishes the fundamental matrix of a system of linear ordinary differential equations of order n with varying coefficients. The exponent is aggregated as an infinite series, whose terms involve multiple integrals and nested commutators.
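Concretely, for $Y'(t) = A(t)\,Y(t)$ with $Y(0) = I$, the Magnus expansion writes $Y(t) = \exp(\Omega(t))$ with $\Omega(t) = \sum_{k=1}^{\infty} \Omega_k(t)$, whose first two terms are (the standard leading terms of the series):

$$\Omega_1(t) = \int_0^t A(t_1)\, dt_1, \qquad \Omega_2(t) = \frac{1}{2} \int_0^t \! dt_1 \int_0^{t_1} \! dt_2\, [A(t_1), A(t_2)],$$

where $[\cdot\,,\cdot]$ is the commutator; the nested commutators mentioned above appear from $\Omega_2$ onward.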
Derivative of the exponential map
In the theory of Lie groups, the exponential map is a map from the Lie algebra $\mathfrak{g}$ of a Lie group G into G. In case G is a matrix Lie group, the exponential map reduces to the matrix exponential. The exponential map, denoted $\exp\colon \mathfrak{g} \to G$, is analytic and has as such a derivative $\frac{d}{dt}\exp(X(t))\colon T\mathfrak{g} \to TG$, where $X(t)$ is a $C^1$ path in the Lie algebra, and a closely related differential $d\exp\colon T\mathfrak{g} \to TG$. The formula for $d\exp$ was first proved by Friedrich Schur (1891).
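The formula itself, in its standard form, reads

$$\frac{d}{dt} e^{X(t)} = e^{X(t)} \, \frac{1 - e^{-\operatorname{ad}_{X}}}{\operatorname{ad}_{X}} \, \frac{dX(t)}{dt},$$

where $\operatorname{ad}_X Y = [X, Y]$ and the fraction denotes the power series $\sum_{k=0}^{\infty} \frac{(-\operatorname{ad}_X)^k}{(k+1)!}$ applied to $\frac{dX}{dt}$.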