In linear algebra, two n-by-n matrices A and B are called similar if there exists an invertible n-by-n matrix P such that

\[ B = P^{-1} A P . \]

Similar matrices represent the same linear map under two (possibly) different bases, with P being the change of basis matrix.

A transformation A ↦ P−1AP is called a similarity transformation or conjugation of the matrix A. In the general linear group, similarity is therefore the same as conjugacy, and similar matrices are also called conjugate; however, in a given subgroup H of the general linear group, the notion of conjugacy may be more restrictive than similarity, since it requires that P be chosen to lie in H.

When defining a linear transformation, a change of basis can result in a simpler form of the same transformation. For example, the matrix representing a rotation in R3 about an axis that is not aligned with a coordinate axis can be complicated to compute. If the axis of rotation were aligned with the positive z-axis, the matrix would simply be

\[ S = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} , \]

where θ is the angle of rotation. In the new coordinate system, the transformation would be written as

\[ y' = S x' , \]

where x' and y' are respectively the original and transformed vectors in a new basis containing a vector parallel to the axis of rotation. In the original basis, the transform would be written as

\[ y = T x , \]

where the vectors x and y and the unknown transform matrix T are in the original basis. To write T in terms of the simpler matrix S, we use the change-of-basis matrix P that transforms x and y as x' = Px and y' = Py:

\[ y' = S x' \implies P y = S P x \implies y = (P^{-1} S P) x . \]

Thus, the matrix in the original basis is given by T = P−1SP. The transform in the original basis is found to be the product of three easy-to-derive matrices. In effect, the similarity transform operates in three steps: change to a new basis (P), perform the simple transformation (S), and change back to the old basis (P−1).

Similarity is an equivalence relation on the space of square matrices.
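As a concrete illustration of the three-step recipe above, here is a minimal NumPy sketch. The helper name rotation_about_axis and the Gram–Schmidt construction of P are illustrative choices, not part of the text: it builds the simple z-axis rotation S, a change-of-basis matrix P whose third basis vector is parallel to the desired axis, and returns T = P−1SP.

```python
import numpy as np

def rotation_about_axis(axis, theta):
    """Rotation by theta about an arbitrary axis, built as T = P^-1 S P.

    Hypothetical helper for illustration: P changes coordinates to a
    basis whose third vector is parallel to `axis`, and S is the simple
    rotation about the z-axis from the text.
    """
    c, s = np.cos(theta), np.sin(theta)
    S = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])

    # Orthonormal basis (u, v, w) with w along the rotation axis,
    # via one Gram-Schmidt step against an arbitrary helper vector.
    w = np.asarray(axis, dtype=float)
    w = w / np.linalg.norm(w)
    h = np.array([1.0, 0.0, 0.0])
    if abs(w @ h) > 0.9:              # helper too close to w; pick another
        h = np.array([0.0, 1.0, 0.0])
    u = h - (h @ w) * w
    u = u / np.linalg.norm(u)
    v = np.cross(w, u)

    # Rows of P are the new basis vectors, so x' = P x converts
    # original coordinates into new-basis coordinates.
    P = np.vstack([u, v, w])

    # Three steps: change basis (P), rotate (S), change back (P^-1).
    return np.linalg.inv(P) @ S @ P

T = rotation_about_axis([1.0, 1.0, 1.0], np.pi / 3)
print(np.allclose(T @ [1.0, 1.0, 1.0], [1.0, 1.0, 1.0]))  # axis is fixed: True
print(np.allclose(T @ T.T, np.eye(3)))                     # T is orthogonal: True
```

Since P here has orthonormal rows, np.linalg.inv(P) equals P.T; the explicit inverse is kept only to mirror the formula T = P−1SP.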

Related courses (15)
MATH-111(e): Linear Algebra
The objective of the course is to introduce the basic notions of linear algebra and its applications.
MSE-651: Crystallography of structural phase transformations
The microstructure of many alloys and ceramics consists of very fine, intricate domains (variants) created by diffusive or displacive phase transformations. The course introduces the crystallography of these transformations.
MATH-115(a): Advanced linear algebra II - diagonalization
The objective of the course is to introduce the basic notions of linear algebra and to rigorously prove the main results of this subject.
Related concepts (21)
Matrix (mathematics)
In mathematics, a matrix (plural matrices) is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns, which is used to represent a mathematical object or a property of such an object. For example,

\[ \begin{bmatrix} 1 & 9 & -13 \\ 20 & 5 & -6 \end{bmatrix} \]

is a matrix with two rows and three columns. This is often referred to as a "two by three matrix", a "2 × 3 matrix", or a matrix of dimension 2 × 3. Without further specifications, matrices represent linear maps, and allow explicit computations in linear algebra.
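To make "matrices represent linear maps" concrete, a short NumPy sketch (the entries are the example matrix above; the vector x is an arbitrary choice): multiplying the 2 × 3 matrix by a vector in R3 applies the corresponding map into R2.

```python
import numpy as np

# The two-by-three example matrix above, viewed as a linear map R^3 -> R^2.
A = np.array([[1, 9, -13],
              [20, 5, -6]])

x = np.array([1, 0, 2])   # an arbitrary vector in R^3
print(A @ x)              # its image in R^2: [-25   8]
```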
Eigenvalues and eigenvectors
In linear algebra, an eigenvector (/ˈaɪɡənˌvɛktər/) or characteristic vector of a linear transformation is a nonzero vector that changes at most by a constant factor when that linear transformation is applied to it. The corresponding eigenvalue, often represented by λ, is the multiplying factor. Geometrically, a transformation matrix rotates, stretches, or shears the vectors it acts upon. The eigenvectors for a linear transformation matrix are the set of vectors that are only stretched, with no rotation or shear.
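A small NumPy check of the defining relation (the matrix A is an arbitrary example, not from the text): each eigenvector is only scaled by its eigenvalue, and, tying back to this page's topic, a similarity transform leaves the eigenvalues unchanged.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)

# Defining relation: A v = lambda v for each eigenpair (columns of the
# eigenvector matrix).
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ v, lam * v))        # True, True

# Similar matrices share eigenvalues: B = P^-1 A P has the same spectrum.
P = np.array([[1.0, 2.0],
              [0.0, 1.0]])
B = np.linalg.inv(P) @ A @ P
print(np.sort(np.linalg.eigvals(A)), np.sort(np.linalg.eigvals(B)))  # [1. 3.] [1. 3.]
```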
Triangular matrix
In mathematics, a triangular matrix is a special kind of square matrix. A square matrix is called lower triangular if all the entries above the main diagonal are zero. Similarly, a square matrix is called upper triangular if all the entries below the main diagonal are zero. Because matrix equations with triangular matrices are easier to solve, they are very important in numerical analysis. By the LU decomposition algorithm, an invertible matrix may be written as the product of a lower triangular matrix L and an upper triangular matrix U if and only if all its leading principal minors are non-zero.
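A quick SciPy illustration of the LU factorization mentioned above (the matrix A is an arbitrary example; note that scipy.linalg.lu uses partial pivoting, so it returns A = P L U with a permutation matrix P, which succeeds even when some leading principal minors vanish):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])

P, L, U = lu(A)                       # A = P @ L @ U with partial pivoting
print(np.allclose(A, P @ L @ U))      # True
print(np.allclose(L, np.tril(L)))     # L is lower triangular: True
print(np.allclose(U, np.triu(U)))     # U is upper triangular: True
```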