In mathematics, a canonical basis is a basis of an algebraic structure that is canonical in a sense that depends on the precise context:
In a coordinate space, and more generally in a free module, it refers to the standard basis defined by the Kronecker delta.
In a polynomial ring, it refers to its standard basis given by the monomials, $(X^i)_i$.
For finite extension fields, it means the polynomial basis.
In linear algebra, it refers to a set of $n$ linearly independent generalized eigenvectors of an $n \times n$ matrix $A$, if the set is composed entirely of Jordan chains (see the sketch after this list).
In representation theory, it refers to the basis of the quantum groups introduced by Lusztig.
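For the linear-algebra sense above, such a basis can be computed explicitly. The following is a minimal sketch using SymPy; the choice of library and of the particular matrix are illustrative assumptions, not part of the original text. jordan_form returns a transform matrix whose columns are generalized eigenvectors organized into Jordan chains.

# Minimal sketch: computing a canonical basis of generalized eigenvectors
# (Jordan chains) with SymPy. The matrix A below is an illustrative choice.
from sympy import Matrix

# A is not diagonalizable: eigenvalue 2 carries a 2x2 Jordan block.
A = Matrix([
    [2, 1, 0],
    [0, 2, 0],
    [0, 0, 3],
])

P, J = A.jordan_form()  # SymPy returns (P, J) with A == P * J * P**-1
print(J)  # the Jordan normal form of A
print(P)  # columns of P: Jordan chains, i.e. a canonical basis
assert A.equals(P * J * P.inv())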
The canonical basis for the irreducible representations of a quantized enveloping algebra of type $ADE$ and also for the plus part of that algebra was introduced by Lusztig by two methods: an algebraic one (using a braid group action and PBW bases) and a topological one (using intersection cohomology). Specializing the parameter $q$ to $q = 1$ yields a canonical basis for the irreducible representations of the corresponding simple Lie algebra, which was not known earlier. Specializing the parameter $q$ to $q = 0$ yields something like a shadow of a basis. This shadow (but not the basis itself) for the case of irreducible representations was considered independently by Kashiwara; it is sometimes called the crystal basis.
The definition of the canonical basis was extended to the Kac-Moody setting by Kashiwara (by an algebraic method) and by Lusztig (by a topological method).
There is a general concept underlying these bases:
Consider the ring of integral Laurent polynomials $\mathcal{Z} := \mathbb{Z}[v, v^{-1}]$ with its two subrings $\mathcal{Z}^{\pm} := \mathbb{Z}[v^{\pm 1}]$ and the automorphism $\overline{\cdot}$ defined by $\overline{v} := v^{-1}$.
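Spelled out (the worked instance is an illustration of my own, not from the original text), the automorphism inverts the variable in each Laurent polynomial:
\[
\overline{\textstyle\sum_k a_k v^k} \;=\; \sum_k a_k v^{-k},
\qquad\text{e.g.}\quad \overline{v^2 + 3v^{-1}} = v^{-2} + 3v.
\]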
A precanonical structure on a free $\mathcal{Z}$-module $F$ consists of
A standard basis $(t_i)_{i \in I}$ of $F$,
An interval finite partial order on $I$, that is, $(-\infty, i] := \{ j \in I \mid j \leq i \}$ is finite for all $i \in I$,
A dualization operation, that is, a bijection $F \to F$ of order two that is $\overline{\cdot}$-semilinear (i.e. $\overline{p f} = \overline{p}\,\overline{f}$ for $p \in \mathcal{Z}$, $f \in F$) and will be denoted by $\overline{\cdot}$ as well.
If a precanonical structure is given, then one can define the submodule $F^{\pm} := \sum_{j \in I} \mathcal{Z}^{\pm} t_j$ of $F$.
A canonical basis of the precanonical structure is then a $\mathcal{Z}$-basis $(c_i)_{i \in I}$ of $F$ that satisfies:
$\overline{c_i} = c_i$ and
$c_i \equiv t_i \mod v F^{+}$
for all $i \in I$.
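As a small illustration (a made-up rank-two example, not taken from the literature): let $I = \{1, 2\}$ with $1 < 2$, $F = \mathcal{Z} t_1 \oplus \mathcal{Z} t_2$, and let the dualization be the semilinear involution determined by
\[
\overline{t_1} = t_1, \qquad \overline{t_2} = t_2 + (v - v^{-1})\, t_1.
\]
Writing $c_2 = t_2 + p(v)\, t_1$ with $p \in v\mathbb{Z}[v]$, the condition $\overline{c_2} = c_2$ forces $p(v) - p(v^{-1}) = v - v^{-1}$, so $p(v) = v$. Hence
\[
c_1 = t_1, \qquad c_2 = t_2 + v\, t_1
\]
is the canonical basis: both elements are bar-invariant, and $c_i \equiv t_i \bmod v F^{+}$.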