In linear algebra, a nilpotent matrix is a square matrix $N$ such that
$$N^k = 0$$
for some positive integer $k$. The smallest such $k$ is called the index of $N$, sometimes the degree of $N$.
More generally, a nilpotent transformation is a linear transformation $L$ of a vector space such that $L^k = 0$ for some positive integer $k$ (and thus, $L^j = 0$ for all $j \geq k$). Both of these concepts are special cases of a more general concept of nilpotence that applies to elements of rings.
The matrix
$$A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$$
is nilpotent with index 2, since $A^2 = 0$.
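This is easy to check directly; a minimal numerical sketch using NumPy:

```python
import numpy as np

# The canonical 2x2 nilpotent matrix from the example above.
A = np.array([[0, 1],
              [0, 0]])

print(A @ A)                 # the zero matrix, so A has index 2
print((A @ A == 0).all())    # True
```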
More generally, any $n$-dimensional triangular matrix with zeros along the main diagonal is nilpotent, with index at most $n$. For example, the matrix
$$B = \begin{pmatrix} 0 & 2 & 1 & 6 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$
is nilpotent, with
$$B^2 = \begin{pmatrix} 0 & 0 & 2 & 7 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}; \quad B^3 = \begin{pmatrix} 0 & 0 & 0 & 6 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}; \quad B^4 = 0.$$
The index of $B$ is therefore 4.
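The successive powers of such a matrix can be checked numerically. The sketch below uses a strictly upper-triangular matrix of index 4 (a plausible instance of the example above; the specific entries are an illustrative choice):

```python
import numpy as np

# Strictly upper-triangular 4x4 matrix: nilpotent with index at most 4.
B = np.array([[0, 2, 1, 6],
              [0, 0, 1, 2],
              [0, 0, 0, 3],
              [0, 0, 0, 0]])

P = np.eye(4, dtype=int)
for k in range(1, 5):
    P = P @ B
    print(f"B^{k} is zero: {not P.any()}")
# B, B^2, B^3 are nonzero; B^4 = 0, so the index is exactly 4
```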
Although the examples above have a large number of zero entries, a typical nilpotent matrix does not. For example, the matrix
$$A = \begin{pmatrix} 5 & -3 & 2 \\ 15 & -9 & 6 \\ 10 & -6 & 4 \end{pmatrix}$$
squares to zero, i.e. $A^2 = 0$, although the matrix has no zero entries.
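One such matrix can be verified numerically. The example below (an illustrative choice) is a rank-one product $uv^{\mathsf T}$ of a column vector $u$ and a row vector $v$ with $vu = 0$, which forces the square to vanish:

```python
import numpy as np

# Dense rank-one matrix A = u v^T with v.u = 0, hence A^2 = u (v.u) v^T = 0.
u = np.array([[1], [3], [2]])
v = np.array([[5, -3, 2]])
A = u @ v          # [[5, -3, 2], [15, -9, 6], [10, -6, 4]], no zero entries

print(A)
print(A @ A)       # the zero matrix: A is nilpotent with index 2
```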
Additionally, any matrices of the form
$$\begin{pmatrix} ab & b^2 \\ -a^2 & -ab \end{pmatrix},$$
such as
$$\begin{pmatrix} 2 & 2 \\ -2 & -2 \end{pmatrix}$$
or
$$\begin{pmatrix} 2 & 1 \\ -4 & -2 \end{pmatrix},$$
square to zero.
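The claim holds for any choice of the parameters $a$ and $b$, since the diagonal entries of the square are $a^2b^2 - b^2a^2 = 0$ and the off-diagonal entries cancel similarly. A quick parameterized check:

```python
import numpy as np

def nilpotent_2x2(a, b):
    # Every matrix of this form satisfies M @ M = 0, for any a and b.
    return np.array([[a * b,  b * b],
                     [-a * a, -a * b]])

for a, b in [(1, 1), (2, 3), (-1, 4)]:
    M = nilpotent_2x2(a, b)
    print(M @ M)   # the zero matrix in every case
```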
Perhaps some of the most striking examples of nilpotent matrices are $n \times n$ square matrices that, despite being nilpotent of index $n$, have no zero entries at all in any of their powers below the index.
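Such dense examples are easy to generate numerically: conjugating the superdiagonal shift matrix by a generic invertible matrix preserves both nilpotency and the index, and typically yields a matrix whose lower powers have no zero entries. A sketch (the random seed and size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Shift matrix: ones on the superdiagonal, nilpotent with index n.
S = np.diag(np.ones(n - 1), k=1)

# N = P S P^{-1} is similar to S, hence nilpotent with the same index n,
# but generically dense, as are its powers N^2, ..., N^{n-1}.
P = rng.standard_normal((n, n))
N = P @ S @ np.linalg.inv(P)

for k in range(1, n + 1):
    Nk = np.linalg.matrix_power(N, k)
    print(k, np.max(np.abs(Nk)))   # drops to ~0 (up to rounding) only at k = n
```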
Consider the linear space of polynomials of a bounded degree. The derivative operator is a linear map on this space. Applying the derivative decreases a polynomial's degree by one, so iterating it eventually yields zero. Therefore, on such a space, the derivative is representable by a nilpotent matrix.
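This can be made concrete by writing down the derivative's matrix in the monomial basis; a sketch for polynomials of degree less than 4:

```python
import numpy as np

# Matrix of d/dx on polynomials of degree < 4, in the basis 1, x, x^2, x^3:
# column j holds the coefficients of the derivative of x^j.
n = 4
D = np.zeros((n, n))
for j in range(1, n):
    D[j - 1, j] = j            # d/dx x^j = j * x^(j-1)

# p(x) = 1 + x + x^2 + x^3  ->  p'(x) = 1 + 2x + 3x^2
p = np.array([1, 1, 1, 1])
print(D @ p)                          # [1. 2. 3. 0.]
print(np.linalg.matrix_power(D, n))   # zero matrix: D is nilpotent with index 4
```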
For an $n \times n$ square matrix $N$ with real (or complex) entries, the following are equivalent:
$N$ is nilpotent.
The characteristic polynomial for $N$ is $\det(xI - N) = x^n$.
The minimal polynomial for $N$ is $x^k$ for some positive integer $k \leq n$.
The only complex eigenvalue for $N$ is 0.
$\operatorname{tr}(N^k) = 0$ for all $k > 0$.
The last of these equivalences holds for matrices over any field of characteristic 0 or of sufficiently large characteristic (cf. Newton's identities).
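These characterizations can be checked numerically on a small example (the matrix below is an illustrative strictly upper-triangular choice):

```python
import numpy as np

N = np.array([[0, 2, 1, 6],
              [0, 0, 1, 2],
              [0, 0, 0, 3],
              [0, 0, 0, 0]])
n = N.shape[0]

# Characteristic polynomial x^4: coefficients [1, 0, 0, 0, 0].
print(np.poly(N))

# All eigenvalues are (numerically) zero.
print(np.linalg.eigvals(N))

# Traces of all positive powers vanish.
print([np.trace(np.linalg.matrix_power(N, k)) for k in range(1, n + 1)])
```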
This theorem has several consequences, including:
The index of an $n \times n$ nilpotent matrix is always less than or equal to $n$. For example, every $2 \times 2$ nilpotent matrix squares to zero.
The determinant and trace of a nilpotent matrix are always zero.
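Both consequences can be illustrated on a small example (the entries below are an arbitrary choice of a $2 \times 2$ nilpotent matrix):

```python
import numpy as np

# A 2x2 nilpotent matrix: trace and determinant are zero, and it already
# squares to zero since its index can be at most 2.
N = np.array([[6, -9],
              [4, -6]])

print(np.trace(N))           # 0
print(np.linalg.det(N))      # ~0
print(N @ N)                 # the zero matrix
```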