In mathematics, a logarithm of a matrix is another matrix such that the matrix exponential of the latter matrix equals the original matrix. It is thus a generalization of the scalar logarithm and in some sense an inverse function of the matrix exponential. Not all matrices have a logarithm, and those matrices that do have a logarithm may have more than one. The study of logarithms of matrices leads to Lie theory, since when a matrix has a logarithm it is an element of a Lie group and the logarithm is the corresponding element of the vector space of the Lie algebra.
The exponential of a matrix A is defined by
$$e^{A} = \sum_{k=0}^{\infty} \frac{A^{k}}{k!}.$$
Given a matrix B, another matrix A is said to be a matrix logarithm of B if $e^{A} = B$. Because the exponential function is not bijective for complex numbers (e.g. $e^{0} = e^{2\pi i} = 1$), numbers can have multiple complex logarithms, and as a consequence, some matrices may have more than one logarithm, as explained below.
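As a quick numerical sketch of this defining relation (using NumPy and SciPy; the particular matrix B below is a hypothetical choice made only for illustration), one can check that exponentiating a computed logarithm recovers the original matrix:

import numpy as np
from scipy.linalg import expm, logm

# Hypothetical invertible matrix, chosen only for illustration.
B = np.array([[2.0, 1.0],
              [0.5, 3.0]])

A = logm(B)                      # a matrix logarithm of B
print(np.allclose(expm(A), B))   # True: exponentiating recovers B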
If B is sufficiently close to the identity matrix, then a logarithm of B may be computed by means of the following power series:
$$\log(B) = \sum_{k=1}^{\infty} (-1)^{k+1} \frac{(B - I)^{k}}{k} = (B - I) - \frac{(B - I)^{2}}{2} + \frac{(B - I)^{3}}{3} - \cdots$$
Specifically, if $\|B - I\| < 1$, then the preceding series converges and $e^{\log(B)} = B$.
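A minimal sketch of this series, assuming a matrix B close to the identity (the helper log_series and the particular B below are illustrative choices, not part of the original text), compared against SciPy's logm:

import numpy as np
from scipy.linalg import logm

def log_series(B, terms=60):
    # Truncated series log(B) = sum_{k>=1} (-1)^(k+1) (B - I)^k / k,
    # valid when ||B - I|| < 1.
    n = B.shape[0]
    X = B - np.eye(n)
    term = np.eye(n)
    total = np.zeros_like(X)
    for k in range(1, terms + 1):
        term = term @ X                        # (B - I)^k
        total += ((-1) ** (k + 1)) * term / k
    return total

# A matrix close to the identity (illustrative choice).
B = np.array([[1.1, 0.2],
              [0.0, 0.9]])
print(np.allclose(log_series(B), logm(B)))     # True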
The rotations in the plane give a simple example. A rotation of angle α around the origin is represented by the 2×2-matrix
$$A = \begin{pmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{pmatrix}.$$
For any integer n, the matrix
$$B = (\alpha + 2\pi n) \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$$
is a logarithm of A.
Proof: Write $J = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ and $\theta = \alpha + 2\pi n$, so that $B = \theta J$ and $J^{2} = -I$. Splitting the exponential series into even and odd powers of $J$ gives
$$e^{B} = \sum_{k=0}^{\infty} \frac{\theta^{k} J^{k}}{k!} = (\cos\theta)\, I + (\sin\theta)\, J = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} = A,$$
where the last equality holds because $\cos(\alpha + 2\pi n) = \cos\alpha$ and $\sin(\alpha + 2\pi n) = \sin\alpha$. qed.
Thus, the matrix A has infinitely many logarithms. This corresponds to the fact that the rotation angle is only determined up to multiples of 2π.
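This can be illustrated numerically: for several integers n the matrices $(\alpha + 2\pi n)J$ are all different, yet each exponentiates to the same rotation (the angle and the range of n below are arbitrary illustrative choices):

import numpy as np
from scipy.linalg import expm

alpha = 0.7                          # illustrative rotation angle
A = np.array([[np.cos(alpha), -np.sin(alpha)],
              [np.sin(alpha),  np.cos(alpha)]])
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# B_n = (alpha + 2*pi*n) * J is a different matrix for each n,
# yet every one of them exponentiates to the same rotation A.
for n in (-2, -1, 0, 1, 2):
    B_n = (alpha + 2 * np.pi * n) * J
    print(n, np.allclose(expm(B_n), A))      # True for every n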
In the language of Lie theory, the rotation matrices A are elements of the Lie group SO(2). The corresponding logarithms B are elements of the Lie algebra so(2), which consists of all skew-symmetric matrices. The matrix
$$\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$$
is a generator of the Lie algebra so(2).
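A short sketch (illustrative only) checking that this generator is skew-symmetric and that exponentiating its scalar multiples produces elements of SO(2):

import numpy as np
from scipy.linalg import expm

J = np.array([[0.0, -1.0],
              [1.0,  0.0]])

print(np.allclose(J.T, -J))                  # True: J is skew-symmetric, hence in so(2)

# Exponentiating scalar multiples of J yields rotation matrices in SO(2):
t = 1.3                                      # arbitrary illustrative parameter
R = expm(t * J)
print(np.allclose(R @ R.T, np.eye(2)))       # True: R is orthogonal
print(np.isclose(np.linalg.det(R), 1.0))     # True: det R = 1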
The question of whether a matrix has a logarithm has the easiest answer when considered in the complex setting. A complex matrix has a logarithm if and only if it is invertible. The logarithm is not unique, but if a matrix has no negative real eigenvalues, then there is a unique logarithm that has eigenvalues all lying in the strip $\{ z \in \mathbb{C} \mid -\pi < \operatorname{Im} z < \pi \}$.
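For instance, the following sketch (illustrative only; it assumes that SciPy's logm returns this unique logarithm, often called the principal logarithm, for such matrices) applies logm to an invertible matrix with no negative real eigenvalues and checks that the eigenvalues of the result lie in the stated strip:

import numpy as np
from scipy.linalg import logm

# Invertible matrix with no negative real eigenvalues (its eigenvalues are +i and -i);
# chosen only for illustration.
B = np.array([[0.0, -1.0],
              [1.0,  0.0]])

X = logm(B)
eigs = np.linalg.eigvals(X)
print(eigs)                                   # approximately +(pi/2)i and -(pi/2)i
print(np.all(np.abs(eigs.imag) < np.pi))      # True: imaginary parts lie in (-pi, pi)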