Direct limit: In mathematics, a direct limit is a way to construct a (typically large) object from many (typically smaller) objects that are put together in a specific way. These objects may be groups, rings, vector spaces or in general objects from any category. The way they are put together is specified by a system of homomorphisms (group homomorphisms, ring homomorphisms, or in general morphisms in the category) between those smaller objects. The direct limit of the objects $A_i$, where $i$ ranges over some directed set $I$, is denoted by $\varinjlim A_i$.
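As a standard illustration, the Prüfer $p$-group arises as the direct limit of the cyclic groups $\mathbb{Z}/p^n\mathbb{Z}$, with the connecting homomorphisms induced by multiplication by $p$:

```latex
% A standard example: the Pruefer p-group as a direct limit.
% Each map Z/p^n Z -> Z/p^{n+1} Z is the injection x |-> px.
\[
  \mathbb{Z}/p\mathbb{Z}
  \hookrightarrow \mathbb{Z}/p^{2}\mathbb{Z}
  \hookrightarrow \mathbb{Z}/p^{3}\mathbb{Z}
  \hookrightarrow \cdots,
  \qquad
  \varinjlim_{n}\, \mathbb{Z}/p^{n}\mathbb{Z} \;\cong\; \mathbb{Z}(p^{\infty}).
\]
```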
Algebraic extension: In mathematics, an algebraic extension is a field extension $L/K$ such that every element of the larger field $L$ is algebraic over the smaller field $K$; that is, every element of $L$ is a root of a non-zero polynomial with coefficients in $K$. A field extension that is not algebraic is said to be transcendental, and must contain transcendental elements, that is, elements that are not algebraic. The algebraic extensions of the field of rational numbers are called algebraic number fields and are the main objects of study of algebraic number theory.
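A short sketch, assuming SymPy is available, that exhibits algebraic numbers by computing their minimal polynomials over $\mathbb{Q}$:

```python
# Each number below is algebraic over Q: SymPy finds a non-zero
# rational polynomial that it is a root of.
from sympy import sqrt, minimal_polynomial, symbols

x = symbols("x")

print(minimal_polynomial(sqrt(2), x))            # x**2 - 2
print(minimal_polynomial(sqrt(2) + sqrt(3), x))  # x**4 - 10*x**2 + 1
```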
Square number: In mathematics, a square number or perfect square is an integer that is the square of an integer; in other words, it is the product of some integer with itself. For example, 9 is a square number, since it equals $3^2$ and can be written as 3 × 3. The usual notation for the square of a number n is not the product n × n, but the equivalent exponentiation $n^2$, usually pronounced as "n squared". The name square number comes from the name of the shape. The unit of area is defined as the area of a unit square (1 × 1).
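A minimal sketch of how one might test whether an integer is a perfect square, using the exact integer square root from the Python standard library:

```python
import math

def is_square(n: int) -> bool:
    """Return True if n is a perfect square, i.e. n = k*k for some integer k."""
    if n < 0:
        return False
    r = math.isqrt(n)   # exact integer square root for non-negative integers
    return r * r == n

print([n for n in range(1, 50) if is_square(n)])  # [1, 4, 9, 16, 25, 36, 49]
```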
Matrix multiplication algorithm: Because matrix multiplication is such a central operation in many numerical algorithms, much work has been invested in making matrix multiplication algorithms efficient. Applications of matrix multiplication in computational problems are found in many fields including scientific computing and pattern recognition, and in seemingly unrelated problems such as counting the paths through a graph. Many different algorithms have been designed for multiplying matrices on different types of hardware, including parallel and distributed systems, where the computational work is spread over multiple processors (perhaps over a network).
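As a minimal illustration of both points, here is the textbook $O(n^3)$ triple-loop multiplication (not one of the optimized algorithms alluded to above), applied to the adjacency matrix of a graph, whose $k$-th power counts the walks of length $k$ between vertices:

```python
# Textbook O(n^3) matrix multiplication; (A^k)[i][j] counts walks of
# length k from vertex i to vertex j when A is an adjacency matrix.

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            for j in range(p):
                C[i][j] += A[i][k] * B[k][j]
    return C

# Directed 3-cycle: 0 -> 1 -> 2 -> 0
A = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]
A3 = matmul(matmul(A, A), A)
print(A3[0][0])  # 1: exactly one walk of length 3 from vertex 0 back to itself
```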
Eigenvalues and eigenvectors: In linear algebra, an eigenvector (/ˈaɪɡənˌvɛktər/) or characteristic vector of a linear transformation is a nonzero vector that changes at most by a constant factor when that linear transformation is applied to it. The corresponding eigenvalue, often represented by $\lambda$, is the multiplying factor. Geometrically, a transformation matrix rotates, stretches, or shears the vectors it acts upon. The eigenvectors for a linear transformation matrix are the set of vectors that are only stretched, with no rotation or shear.
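A minimal sketch, assuming NumPy, that verifies the defining relation $A v = \lambda v$ for a matrix that only stretches:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])            # stretches x by 2 and y by 3
eigvals, eigvecs = np.linalg.eig(A)

for lam, v in zip(eigvals, eigvecs.T):   # eigenvectors are the columns
    assert np.allclose(A @ v, lam * v)   # A only stretches v by factor lam
print(eigvals)  # [2. 3.]
```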
Invertible matrix: In linear algebra, an n-by-n square matrix A is called invertible (also nonsingular, nondegenerate or (rarely used) regular) if there exists an n-by-n square matrix B such that $AB = BA = I_n$, where $I_n$ denotes the n-by-n identity matrix and the multiplication used is ordinary matrix multiplication. If this is the case, then the matrix B is uniquely determined by A, and is called the (multiplicative) inverse of A, denoted by $A^{-1}$. Matrix inversion is the process of finding the matrix B that satisfies the prior equation for a given invertible matrix A.
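A minimal sketch, assuming NumPy, that computes the inverse and checks the defining equation:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])    # det(A) = -2 != 0, so A is invertible
B = np.linalg.inv(A)          # the (multiplicative) inverse A^{-1}

I2 = np.eye(2)
assert np.allclose(A @ B, I2)  # AB = I
assert np.allclose(B @ A, I2)  # BA = I
```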
Probability vector: In mathematics and statistics, a probability vector or stochastic vector is a vector with non-negative entries that add up to one. The positions (indices) of a probability vector represent the possible outcomes of a discrete random variable, and the vector gives us the probability mass function of that random variable, which is the standard way of characterizing a discrete probability distribution. A probability vector can be written as either a column or a row; for example, $(0.5, 0.25, 0.25)$ is a probability vector.
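A minimal sketch, assuming NumPy, that normalizes a vector of non-negative counts into a probability vector and checks the two defining properties:

```python
import numpy as np

counts = np.array([3.0, 1.0, 4.0, 2.0])   # e.g. observed outcome frequencies
p = counts / counts.sum()                 # p = [0.3, 0.1, 0.4, 0.2]

# Defining properties: non-negative entries that sum to one.
assert np.all(p >= 0) and np.isclose(p.sum(), 1.0)
print(p)
```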
Sylvester matrix: In mathematics, a Sylvester matrix is a matrix associated to two univariate polynomials with coefficients in a field or a commutative ring. The entries of the Sylvester matrix of two polynomials are coefficients of the polynomials. The determinant of the Sylvester matrix of two polynomials is their resultant, which is zero when the two polynomials have a common root (in case of coefficients in a field) or a non-constant common divisor (in case of coefficients in an integral domain).
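A sketch, assuming NumPy, of the standard banded construction: for polynomials of degrees $m$ and $n$, the Sylvester matrix is $(m+n) \times (m+n)$, built from shifted copies of the two coefficient lists (highest-degree coefficients first); its determinant is the resultant:

```python
import numpy as np

def sylvester(p, q):
    """Sylvester matrix of polynomials given as coefficient lists,
    highest degree first."""
    m, n = len(p) - 1, len(q) - 1        # degrees of p and q
    S = np.zeros((m + n, m + n))
    for i in range(n):                   # n shifted copies of p's coefficients
        S[i, i:i + m + 1] = p
    for i in range(m):                   # m shifted copies of q's coefficients
        S[n + i, i:i + n + 1] = q
    return S

p = [1, -3, 2]   # x^2 - 3x + 2 = (x - 1)(x - 2)
q = [1, -1]      # x - 1: shares the root x = 1 with p
print(np.linalg.det(sylvester(p, q)))  # ~0.0: the resultant vanishes
```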
Conjugate transpose: In mathematics, the conjugate transpose, also known as the Hermitian transpose, of an $m \times n$ complex matrix $\mathbf{A}$ is an $n \times m$ matrix obtained by transposing $\mathbf{A}$ and applying complex conjugation to each entry (the complex conjugate of $a + ib$ being $a - ib$, for real numbers $a$ and $b$). It is often denoted as $\mathbf{A}^{\mathrm{H}}$ or $\mathbf{A}^{*}$ or $\mathbf{A}'$, and very commonly in physics as $\mathbf{A}^{\dagger}$. For real matrices, the conjugate transpose is just the transpose, $\mathbf{A}^{\mathrm{H}} = \mathbf{A}^{\mathsf{T}}$. The conjugate transpose of an $m \times n$ matrix $\mathbf{A}$ is formally defined by $(\mathbf{A}^{\mathrm{H}})_{ij} = \overline{A_{ji}}$, where the subscript $ij$ denotes the $(i,j)$-th entry, for $1 \le i \le n$ and $1 \le j \le m$, and the overbar denotes a scalar complex conjugate.
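A minimal sketch, assuming NumPy, where the conjugate transpose is obtained by composing `.conj()` and `.T`:

```python
import numpy as np

A = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 4 + 0j]])
A_H = A.conj().T            # transpose, then conjugate each entry

print(A_H[0, 1])            # conjugate of A[1, 0]: -1j
assert np.allclose(A_H, A.T.conj())   # the two operations commute
```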
Characteristic polynomial: In linear algebra, the characteristic polynomial of a square matrix is a polynomial which is invariant under matrix similarity and has the eigenvalues as roots. It has the determinant and the trace of the matrix among its coefficients. The characteristic polynomial of an endomorphism of a finite-dimensional vector space is the characteristic polynomial of the matrix of that endomorphism over any basis (that is, the characteristic polynomial does not depend on the choice of a basis).
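A minimal sketch, assuming NumPy, where `np.poly` applied to a square matrix returns the coefficients of the characteristic polynomial $\det(tI - A)$, exhibiting the trace and determinant among them:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
coeffs = np.poly(A)               # [1., -4., 3.]  <=>  t^2 - 4t + 3
print(coeffs)

# For a 2x2 matrix: t^2 - trace(A)*t + det(A)
assert np.isclose(coeffs[1], -np.trace(A))
assert np.isclose(coeffs[2], np.linalg.det(A))
print(np.roots(coeffs))           # the eigenvalues: [3. 1.]
```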