In mathematics, particularly in linear algebra, a skew-symmetric (or antisymmetric or antimetric) matrix is a square matrix whose transpose equals its negative. That is, it satisfies the condition $A^\textsf{T} = -A$.
In terms of the entries of the matrix, if $a_{ij}$ denotes the entry in the $i$-th row and $j$-th column, then the skew-symmetric condition is equivalent to $a_{ji} = -a_{ij}$ for all $i$ and $j$.
The matrix
$$A = \begin{bmatrix} 0 & 2 & -45 \\ -2 & 0 & -4 \\ 45 & 4 & 0 \end{bmatrix}$$
is skew-symmetric because
$$-A = \begin{bmatrix} 0 & -2 & 45 \\ 2 & 0 & 4 \\ -45 & -4 & 0 \end{bmatrix} = A^\textsf{T}.$$
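As a quick numerical check, the following sketch (using NumPy, and assuming the example matrix above) verifies the defining condition $A^\textsf{T} = -A$:

```python
import numpy as np

# The example matrix used above.
A = np.array([[  0,  2, -45],
              [ -2,  0,  -4],
              [ 45,  4,   0]])

# Defining condition: the transpose equals the negative.
print(np.array_equal(A.T, -A))  # True
```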
Throughout, we assume that all matrix entries belong to a field whose characteristic is not equal to 2. That is, we assume that 1 + 1 ≠ 0, where 1 denotes the multiplicative identity and 0 the additive identity of the given field. If the characteristic of the field is 2, then a skew-symmetric matrix is the same thing as a symmetric matrix.
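To illustrate the characteristic-2 caveat, here is a small sketch working with matrix entries modulo 2 (a stand-in for GF(2)): since $-1 \equiv 1$, the condition $A^\textsf{T} = -A$ reduces to $A^\textsf{T} = A$.

```python
import numpy as np

# Over a field of characteristic 2, negation is the identity,
# so "skew-symmetric" and "symmetric" coincide.
A = np.array([[0, 1],
              [1, 0]])              # symmetric, entries in {0, 1}

minus_A_mod2 = (-A) % 2             # negation modulo 2
print(np.array_equal(A.T % 2, minus_A_mod2))  # True: A is also "skew-symmetric" mod 2
```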
The sum of two skew-symmetric matrices is skew-symmetric.
A scalar multiple of a skew-symmetric matrix is skew-symmetric.
The elements on the diagonal of a skew-symmetric matrix are zero, and therefore its trace equals zero.
If $A$ is a real skew-symmetric matrix and $\lambda$ is a real eigenvalue, then $\lambda = 0$, i.e. the nonzero eigenvalues of a skew-symmetric matrix are non-real.
If $A$ is a real skew-symmetric matrix, then $I + A$ is invertible, where $I$ is the identity matrix.
If $A$ is a real skew-symmetric matrix, then $A^2$ is a symmetric negative semi-definite matrix.
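The following sketch (NumPy, using a randomly generated real skew-symmetric matrix) illustrates the properties above: zero trace, purely imaginary eigenvalues, invertibility of $I + A$, and negative semi-definiteness of $A^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M - M.T                                        # real skew-symmetric by construction

print(np.isclose(np.trace(A), 0))                  # diagonal is zero, so trace is zero
print(np.allclose(np.linalg.eigvals(A).real, 0))   # eigenvalues are purely imaginary
print(np.linalg.det(np.eye(4) + A) != 0)           # I + A is invertible
print(np.all(np.linalg.eigvalsh(A @ A) <= 1e-12))  # A^2 is symmetric negative semi-definite
```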
As a result of the first two properties above, the set of all skew-symmetric matrices of a fixed size forms a vector space. The space of $n \times n$ skew-symmetric matrices has dimension $\tfrac{1}{2}n(n-1)$.
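A minimal sketch of the dimension count: the matrices $E_{ij} - E_{ji}$ for $i < j$ (where $E_{ij}$ has a single 1 in position $(i,j)$) form a basis, so counting the strictly upper-triangular positions gives the dimension.

```python
import numpy as np

def skew_basis(n):
    """Standard basis E_ij - E_ji (i < j) of the n x n skew-symmetric matrices."""
    basis = []
    for i in range(n):
        for j in range(i + 1, n):
            B = np.zeros((n, n))
            B[i, j], B[j, i] = 1.0, -1.0
            basis.append(B)
    return basis

n = 4
print(len(skew_basis(n)), n * (n - 1) // 2)  # both equal 6
```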
Let $\text{Mat}_n$ denote the space of $n \times n$ matrices. A skew-symmetric matrix is determined by $\tfrac{1}{2}n(n-1)$ scalars (the number of entries above the main diagonal); a symmetric matrix is determined by $\tfrac{1}{2}n(n+1)$ scalars (the number of entries on or above the main diagonal). Let $\text{Skew}_n$ denote the space of $n \times n$ skew-symmetric matrices and $\text{Sym}_n$ denote the space of $n \times n$ symmetric matrices. If $A \in \text{Mat}_n$ then
$$A = \tfrac{1}{2}\left(A - A^\textsf{T}\right) + \tfrac{1}{2}\left(A + A^\textsf{T}\right).$$
Notice that $\tfrac{1}{2}\left(A - A^\textsf{T}\right) \in \text{Skew}_n$ and $\tfrac{1}{2}\left(A + A^\textsf{T}\right) \in \text{Sym}_n$. This is true for every square matrix $A$ with entries from any field whose characteristic is different from 2. Then, since $\text{Mat}_n = \text{Skew}_n + \text{Sym}_n$ and $\text{Skew}_n \cap \text{Sym}_n = \{0\}$,
$$\text{Mat}_n = \text{Skew}_n \oplus \text{Sym}_n,$$
where $\oplus$ denotes the direct sum.
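A short NumPy sketch of this decomposition: split an arbitrary square matrix into its skew-symmetric and symmetric parts and check that they recombine to the original.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))

skew_part = 0.5 * (A - A.T)                   # lies in Skew_n
sym_part = 0.5 * (A + A.T)                    # lies in Sym_n

print(np.allclose(skew_part.T, -skew_part))   # True: skew-symmetric
print(np.allclose(sym_part.T, sym_part))      # True: symmetric
print(np.allclose(skew_part + sym_part, A))   # True: the decomposition recovers A
```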
Denote by $\langle \cdot, \cdot \rangle$ the standard inner product on $\mathbb{R}^n$. The real $n \times n$ matrix $A$ is skew-symmetric if and only if $\langle Ax, y \rangle = -\langle x, Ay \rangle$ for all $x, y \in \mathbb{R}^n$.
This is also equivalent to $\langle Ax, x \rangle = 0$ for all $x \in \mathbb{R}^n$ (one implication being obvious, the other a plain consequence of $\langle A(x+y), x+y \rangle = 0$ for all $x$ and $y$).
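As a numerical illustration (a NumPy sketch with randomly drawn vectors), a skew-symmetric $A$ satisfies $\langle Ax, y \rangle = -\langle x, Ay \rangle$ and $\langle Ax, x \rangle = 0$:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((5, 5))
A = M - M.T                                              # real skew-symmetric
x, y = rng.standard_normal(5), rng.standard_normal(5)

print(np.isclose(np.dot(A @ x, y), -np.dot(x, A @ y)))   # <Ax, y> = -<x, Ay>
print(np.isclose(np.dot(A @ x, x), 0))                   # <Ax, x> = 0
```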
In mathematics, a matrix (plural matrices) is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns, which is used to represent a mathematical object or a property of such an object. For example,
$$\begin{bmatrix} 1 & 9 & -13 \\ 20 & 5 & -6 \end{bmatrix}$$
is a matrix with two rows and three columns. This is often referred to as a "two by three matrix", a "$2 \times 3$ matrix", or a matrix of dimension $2 \times 3$. Without further specifications, matrices represent linear maps, and allow explicit computations in linear algebra.
In mathematics, the exterior algebra, or Grassmann algebra, named after Hermann Grassmann, is an algebra that uses the exterior product or wedge product as its multiplication. In mathematics, the exterior product or wedge product of vectors is an algebraic construction used in geometry to study areas, volumes, and their higher-dimensional analogues. The exterior product of two vectors $u$ and $v$, denoted by $u \wedge v$, is called a bivector and lives in a space called the exterior square, a vector space that is distinct from the original space of vectors.
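One common identification represents a bivector on $\mathbb{R}^n$ by a skew-symmetric matrix; as a sketch, $u \wedge v$ can be encoded as $uv^\textsf{T} - vu^\textsf{T}$, which is skew-symmetric and changes sign when $u$ and $v$ are swapped.

```python
import numpy as np

def wedge(u, v):
    """Represent the bivector u ∧ v by the skew-symmetric matrix u v^T - v u^T."""
    return np.outer(u, v) - np.outer(v, u)

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 2.0, 0.0])
B = wedge(u, v)
print(np.allclose(B.T, -B))           # True: the representation is skew-symmetric
print(np.allclose(wedge(v, u), -B))   # True: the wedge product is antisymmetric
```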
In linear algebra, a square matrix $A$ is called diagonalizable or non-defective if it is similar to a diagonal matrix, i.e., if there exists an invertible matrix $P$ and a diagonal matrix $D$ such that $P^{-1}AP = D$, or equivalently $A = PDP^{-1}$. (Such $P$, $D$ are not unique.) For a finite-dimensional vector space $V$, a linear map $T : V \to V$ is called diagonalizable if there exists an ordered basis of $V$ consisting of eigenvectors of $T$.
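A minimal NumPy sketch of this definition: for a diagonalizable matrix, the eigenvector matrix $P$ and the diagonal matrix of eigenvalues $D$ satisfy $A = PDP^{-1}$.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                       # symmetric, hence diagonalizable

eigvals, P = np.linalg.eig(A)                    # columns of P are eigenvectors
D = np.diag(eigvals)

print(np.allclose(A, P @ D @ np.linalg.inv(P)))  # True: A = P D P^{-1}
```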