Related concepts
Skew-symmetric matrix
In mathematics, particularly in linear algebra, a skew-symmetric (or antisymmetric or antimetric) matrix is a square matrix whose transpose equals its negative. That is, it satisfies the condition A^T = -A. In terms of the entries of the matrix, if a_{ij} denotes the entry in the i-th row and j-th column, then the skew-symmetric condition is equivalent to a_{ji} = -a_{ij} for all i and j. For example, the matrix
A = \begin{pmatrix} 0 & 2 \\ -2 & 0 \end{pmatrix}
is skew-symmetric because -A = A^T. Throughout, we assume that all matrix entries belong to a field whose characteristic is not equal to 2.
Trace (linear algebra)
In linear algebra, the trace of a square matrix A, denoted tr(A), is defined to be the sum of the elements on the main diagonal (from the upper left to the lower right) of A, that is, tr(A) = a_{11} + a_{22} + \cdots + a_{nn}. The trace is only defined for a square matrix (n × n). It can be proven that the trace of a matrix is the sum of its (complex) eigenvalues (counted with multiplicities). It can also be proven that tr(AB) = tr(BA) for any two matrices A and B of compatible dimensions (A of size m × n and B of size n × m). This implies that similar matrices have the same trace: tr(P^{-1}AP) = tr(APP^{-1}) = tr(A).
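As a quick worked illustration of the cyclic property (the matrices below are arbitrary examples chosen here, not taken from the source): with
A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \quad B = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad AB = \begin{pmatrix} 2 & 1 \\ 4 & 3 \end{pmatrix}, \quad BA = \begin{pmatrix} 3 & 4 \\ 1 & 2 \end{pmatrix},
we get tr(A) = 1 + 4 = 5 and tr(AB) = 2 + 3 = 5 = 3 + 2 = tr(BA).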
Minor (linear algebra)
In linear algebra, a minor of a matrix A is the determinant of some smaller square matrix, cut down from A by removing one or more of its rows and columns. Minors obtained by removing just one row and one column from square matrices (first minors) are required for calculating matrix cofactors, which in turn are useful for computing both the determinant and inverse of square matrices. The requirement that the square matrix be smaller than the original matrix is often omitted in the definition.
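A small worked example (the matrix is an arbitrary illustration): for
A = \begin{pmatrix} 1 & 4 & 7 \\ 3 & 0 & 5 \\ -1 & 9 & 11 \end{pmatrix},
the first minor M_{2,3} is the determinant of the submatrix obtained by deleting row 2 and column 3:
M_{2,3} = \det\begin{pmatrix} 1 & 4 \\ -1 & 9 \end{pmatrix} = 1 \cdot 9 - 4 \cdot (-1) = 13,
and the corresponding cofactor is C_{2,3} = (-1)^{2+3} M_{2,3} = -13.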
Zero matrix
In mathematics, particularly linear algebra, a zero matrix or null matrix is a matrix all of whose entries are zero. It also serves as the additive identity of the additive group of m × n matrices, and is denoted by the symbol O or 0 followed by subscripts corresponding to the dimension of the matrix as the context sees fit. Some examples of zero matrices are
0_{1,1} = \begin{pmatrix} 0 \end{pmatrix}, \quad 0_{2,2} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}, \quad 0_{2,3} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.
The set of m × n matrices with entries in a ring K forms a ring K_{m,n}. The zero matrix 0_{K_{m,n}} in K_{m,n} is the matrix with all entries equal to 0_K, where 0_K is the additive identity in K.
Laplace expansion
In linear algebra, the Laplace expansion, named after Pierre-Simon Laplace, also called cofactor expansion, is an expression of the determinant of an n × n matrix B as a weighted sum of minors, which are the determinants of some (n − 1) × (n − 1) submatrices of B. Specifically, for every i,
\det(B) = \sum_{j=1}^{n} (-1)^{i+j} b_{ij} m_{ij},
where b_{ij} is the entry of the i-th row and j-th column of B, and m_{ij} is the determinant of the submatrix obtained by removing the i-th row and the j-th column of B. The term (-1)^{i+j} m_{ij} is called the cofactor of b_{ij} in B.
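For instance, expanding along the first row of an arbitrary 3 × 3 example:
\det\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix} = 1 \det\begin{pmatrix} 5 & 6 \\ 8 & 9 \end{pmatrix} - 2 \det\begin{pmatrix} 4 & 6 \\ 7 & 9 \end{pmatrix} + 3 \det\begin{pmatrix} 4 & 5 \\ 7 & 8 \end{pmatrix} = 1(-3) - 2(-6) + 3(-3) = 0,
which shows that this particular matrix is singular.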
Main diagonal
In linear algebra, the main diagonal (sometimes principal diagonal, primary diagonal, leading diagonal, major diagonal, or good diagonal) of a matrix A is the list of entries a_{ij} where i = j. All off-diagonal elements are zero in a diagonal matrix. The antidiagonal (sometimes counter diagonal, secondary diagonal, trailing diagonal, minor diagonal, off diagonal, or bad diagonal) of an order-N square matrix is the collection of entries a_{ij} such that i + j = N + 1 for all 1 ≤ i, j ≤ N.
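To make the two diagonals concrete, for a generic 3 × 3 matrix
A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix},
the main diagonal is (a_{11}, a_{22}, a_{33}), and the antidiagonal is (a_{13}, a_{22}, a_{31}), since i + j = 4 = N + 1 for each of those entries.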
Leibniz formula for determinants
In algebra, the Leibniz formula, named in honor of Gottfried Leibniz, expresses the determinant of a square matrix in terms of permutations of the matrix elements. If A is an n × n matrix, where a_{ij} is the entry in the i-th row and j-th column of A, the formula is
\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)},
where sgn is the sign function of permutations in the permutation group S_n, which returns +1 and −1 for even and odd permutations, respectively. Another common notation for the formula is in terms of the Levi-Civita symbol and makes use of the Einstein summation notation, where it becomes
\det(A) = \varepsilon_{i_1 \cdots i_n} a_{1 i_1} \cdots a_{n i_n},
which may be more familiar to physicists.
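In the 2 × 2 case the formula reduces to the familiar expression: S_2 contains only the identity permutation (even) and the single transposition (odd), so
\det\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} = a_{11} a_{22} - a_{12} a_{21}.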
Adjugate matrix
In linear algebra, the adjugate or classical adjoint of a square matrix A is the transpose of its cofactor matrix and is denoted by adj(A). It is also occasionally known as adjunct matrix, or "adjoint", though the latter term today normally refers to a different concept, the adjoint operator, which for a matrix is the conjugate transpose. The product of a matrix with its adjugate gives a diagonal matrix (entries not on the main diagonal are zero) whose diagonal entries are the determinant of the original matrix:
A \operatorname{adj}(A) = \operatorname{adj}(A) A = \det(A) I,
where I is the identity matrix of the same size as A.
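For a 2 × 2 matrix the adjugate simply swaps the diagonal entries and negates the off-diagonal ones, which makes the defining identity easy to verify directly:
\operatorname{adj}\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}, \quad \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix} = (ad - bc) I = \det(A) I.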
Matrix congruence
In mathematics, two square matrices A and B over a field are called congruent if there exists an invertible matrix P over the same field such that P^T A P = B, where "T" denotes the matrix transpose. Matrix congruence is an equivalence relation. Matrix congruence arises when considering the effect of change of basis on the Gram matrix attached to a bilinear form or quadratic form on a finite-dimensional vector space: two matrices are congruent if and only if they represent the same bilinear form with respect to different bases.
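A minimal example (the matrices here are chosen purely for illustration):
A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 2 & 0 \\ 0 & -2 \end{pmatrix}
are congruent, since for the invertible matrix P = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} one checks directly that P^T A P = B; both matrices represent the same symmetric bilinear form with respect to different bases.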
Indeterminate (variable)
In mathematics, particularly in formal algebra, an indeterminate is a symbol that is treated as a variable, but does not stand for anything else except itself. It may be used as a placeholder in objects such as polynomials and formal power series. In particular: It does not designate a constant or a parameter of the problem. It is not an unknown that could be solved for. It is not a variable designating a function argument, or a variable being summed or integrated over. It is not any type of bound variable.
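A standard illustration of the distinction: in the polynomial ring \mathbf{F}_2[X] over the two-element field, X and X^2 are different polynomials in the indeterminate X, even though they define the same function on \mathbf{F}_2; a polynomial is determined by its coefficients, not by the values a substituted variable would take.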
