Symmetric matrix

Summary

In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally, a matrix A is symmetric if and only if A = A^T.
Because equal matrices have equal dimensions, only square matrices can be symmetric.
The entries of a symmetric matrix are symmetric with respect to the main diagonal. So if a_{ij} denotes the entry in the ith row and jth column, then a_{ij} = a_{ji} for all indices i and j.
Every square diagonal matrix is symmetric, since all off-diagonal elements are zero. Similarly, in characteristic different from 2, each diagonal element of a skew-symmetric matrix must be zero, since each is its own negative.
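
As a quick illustration (a NumPy sketch added here, not part of the source text), the condition a_{ij} = a_{ji} amounts to comparing a matrix with its transpose:

```python
import numpy as np

# A symmetric matrix: entries mirror across the main diagonal.
A = np.array([[1.0, 7.0, 3.0],
              [7.0, 4.0, 5.0],
              [3.0, 5.0, 6.0]])

def is_symmetric(M, tol=1e-12):
    """Return True if M is square and equal to its transpose."""
    return M.shape[0] == M.shape[1] and np.allclose(M, M.T, atol=tol)

print(is_symmetric(A))                          # True
print(is_symmetric(np.diag([1.0, 2.0, 3.0])))   # True: diagonal matrices are symmetric
print(is_symmetric(np.array([[1.0, 2.0],
                             [3.0, 4.0]])))     # False
```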
In linear algebra, a real symmetric matrix represents a self-adjoint operator, expressed in an orthonormal basis, over a real inner product space. The corresponding object for a complex inner product space is a Hermitian matrix with complex-valued entries, which is equal to its conjugate transpose. Therefore, in linear algebra over the complex numbers, it is often assumed that a symmetric matrix refers to one which has real-valued entries.
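
The distinction can be seen in a short NumPy sketch (an added illustration, with example matrices chosen here): a complex matrix can be Hermitian without being symmetric, and symmetric without being Hermitian:

```python
import numpy as np

# Hermitian: equal to its conjugate transpose (real diagonal, conjugate off-diagonal pairs).
H = np.array([[2.0 + 0j, 1 - 1j],
              [1 + 1j,   3.0 + 0j]])

# Complex symmetric: equal to its plain transpose, but not Hermitian here.
S = np.array([[1j,     2 + 1j],
              [2 + 1j, 0j]])

print(np.allclose(H, H.conj().T))  # True:  Hermitian
print(np.allclose(H, H.T))         # False: not symmetric
print(np.allclose(S, S.T))         # True:  symmetric
print(np.allclose(S, S.conj().T))  # False: not Hermitian
```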

Official source

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.


Related concepts (54)


Eigenvalues and eigenvectors

In linear algebra, an eigenvector (ˈaɪgənˌvɛktər) or characteristic vector of a linear transformation is a nonzero vector that changes at most by a constant factor when that linear transformation is applied to it.
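
For a symmetric matrix, the eigenvalues are real and the eigenvectors can be chosen orthonormal. A small NumPy sketch (added for illustration) using the symmetric-specific routine `numpy.linalg.eigh`:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # symmetric

# eigh is specialized for symmetric/Hermitian matrices: it returns real
# eigenvalues in ascending order and an orthonormal set of eigenvectors.
w, V = np.linalg.eigh(A)

print(w)                                    # [1. 3.]
print(np.allclose(A @ V, V @ np.diag(w)))   # True: A v = lambda v, column by column
print(np.allclose(V.T @ V, np.eye(2)))      # True: eigenvectors are orthonormal
```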

Matrix (mathematics)

In mathematics, a matrix (plural matrices) is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns, which is used to represent a mathematical object or a property of such an object.

Linear algebra

Linear algebra is the branch of mathematics concerning linear equations such as
a_1 x_1 + \cdots + a_n x_n = b,
linear maps such as
(x_1, \ldots, x_n) \mapsto a_1 x_1 + \cdots + a_n x_n,
and their representations in vector spaces and through matrices.
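
A linear map of this form is simply a dot product with the coefficient vector. A minimal NumPy sketch (an added illustration, with coefficients chosen here) showing the map and its linearity:

```python
import numpy as np

# The linear map (x_1, ..., x_n) |-> a_1 x_1 + ... + a_n x_n as a dot product.
a = np.array([2.0, -1.0, 3.0])

def f(x):
    return a @ x  # same as sum(a_i * x_i)

x = np.array([1.0, 4.0, 2.0])
y = np.array([0.0, 1.0, -1.0])

print(f(x))                                # 2*1 - 1*4 + 3*2 = 4.0
print(np.isclose(f(x + y), f(x) + f(y)))   # True: additivity
print(np.isclose(f(2.0 * x), 2.0 * f(x)))  # True: homogeneity
```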

Related courses (59)

MGT-581: Introduction to econometrics

The course provides an introduction to econometrics. The objective is to learn how to make valid (i.e., causal) inference from economic data. It explains the main estimators and presents methods to deal with endogeneity issues.

MATH-111(en): Linear algebra (english)

The purpose of the course is to introduce the basic notions of linear algebra and its applications.

MATH-111(e): Linear Algebra

The objective of the course is to introduce the basic notions of linear algebra and its applications.

Related publications (41)

In this thesis, we study two distinct problems.
The first problem is the study of the linear system of partial differential equations obtained by taking a k-form, applying the exterior derivative 'd' to it, and adding the wedge product with a 1-form 'a'. The study of this differential operator is linked to that of multiplication by a two-form, that is, the linear system obtained by taking a k-form and applying the exterior wedge product with 'da', the exterior derivative of 'a'. We establish links between the partial differential equation and the linear system.
The second problem is a generalization of the symmetric gradient and the curl equation. The symmetric-gradient equation consists of taking a vector field, applying the gradient, and adding the transpose of the gradient, whereas in the curl equation we subtract the transpose of the gradient. Both can be seen as equations of the form A \nabla u + (\nabla u)^t A, where A is a symmetric matrix in the case of the symmetric gradient and skew-symmetric in the case of the curl equation. We generalize to the case where A satisfies no symmetry assumption and, more significantly, add a Dirichlet condition on the boundary.
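
To make the two operators concrete, here is a small NumPy sketch (an illustration added here, not from the abstract), using a linear vector field u(x) = Mx so that the Jacobian grad u equals M everywhere:

```python
import numpy as np

# Linear vector field u(x) = M x, so grad u = M at every point.
M = np.array([[0.0, 2.0],
              [1.0, 3.0]])

sym_grad  = M + M.T   # symmetric-gradient operator: grad u + (grad u)^t
curl_part = M - M.T   # curl-type operator:          grad u - (grad u)^t

print(np.allclose(sym_grad, sym_grad.T))     # True: the result is symmetric
print(np.allclose(curl_part, -curl_part.T))  # True: the result is skew-symmetric
print(np.allclose(sym_grad + curl_part, 2.0 * M))  # True: the two parts recombine
```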

In this thesis we address the computation of a spectral decomposition of symmetric banded matrices. For large-scale matrices, where classical dense linear algebra routines are not applicable, it is essential to design alternative techniques that take advantage of the structure of the data. Our approach is based on exploiting the underlying hierarchical low-rank structure of the intermediate results. Specifically, we study the computation, in the hierarchically off-diagonal low-rank (HODLR) format, of two crucial tools, the QR decomposition and spectral projectors, in order to devise a fast spectral divide-and-conquer method.
In the first part we propose a new method for computing a QR decomposition of a
HODLR matrix, where the factor R is returned in the HODLR format, while Q is given
in a compact WY representation. The new algorithm enjoys linear-polylogarithmic complexity and is norm-wise accurate. Moreover, it maintains the orthogonality of the Q factor.
The approximate computation of spectral projectors is addressed in the second part.
This problem has raised some interest in the context of linear scaling electronic structure
methods, where the presence of small spectral gaps causes difficulties for existing algorithms based on approximate sparsity. We propose a fast method based on a variant of the QDWH algorithm, exploiting the fact that QDWH applied to a banded input generates a sequence of matrices that can be efficiently represented in the HODLR format. From
the theoretical side, we provide an analysis of the structure preservation in the final
outcome. More specifically, we derive a priori decay bounds on the singular values in
the off-diagonal blocks of spectral projectors. This shows that our method, based on data sparsity, offers memory savings over approximate-sparsity approaches thanks to its logarithmic dependence on the spectral gap. Numerical experiments conducted on tridiagonal and banded matrices demonstrate that the proposed algorithm is robust with respect to the spectral gap and exhibits linear-polylogarithmic complexity. Furthermore, it yields very accurate approximations of the spectral projectors even for very large matrices.
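
As a plain dense illustration of what a spectral projector is (this is not the QDWH/HODLR method of the thesis; it is a simple Newton iteration for the matrix sign function, added here under that stated assumption):

```python
import numpy as np

def spectral_projector(A, sigma, iters=50):
    """Projector onto the invariant subspace of eigenvalues of A below sigma.

    Uses the Newton iteration X <- (X + X^{-1})/2 for the matrix sign function;
    a dense illustration only, not the thesis's HODLR/QDWH algorithm.
    Requires that sigma is not an eigenvalue of A.
    """
    n = A.shape[0]
    X = A - sigma * np.eye(n)
    for _ in range(iters):
        X = 0.5 * (X + np.linalg.inv(X))
    return 0.5 * (np.eye(n) - X)  # (I - sign(A - sigma I)) / 2

A = np.diag([1.0, 2.0, 5.0, 6.0])
P = spectral_projector(A, sigma=3.0)
print(np.allclose(P @ P, P))   # True: P is a projector
print(round(np.trace(P)))      # 2: two eigenvalues of A lie below sigma
```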
The last part of this thesis is concerned with developing a fast spectral divide-and-conquer
method in the HODLR format. The idea behind this technique is to recursively split the
spectrum, using invariant subspaces associated with its subsets. This makes it possible to obtain a complete spectral decomposition by solving smaller problems. Following
Nakatsukasa and Higham, we combine our method for the fast computation of spectral
projectors with a novel technique for finding a basis for the range of such a HODLR
matrix. The latter strongly relies on properties of spectral projectors, and it is analyzed
theoretically. Numerical results confirm that the method is applicable for large-scale
matrices, and exhibits linear-polylogarithmic complexity.
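
A toy version of one divide step can be sketched as follows (a dense NumPy illustration added here; the thesis works in the HODLR format with QDWH-based projectors, whereas this sketch uses a Newton sign iteration and an SVD to extract subspace bases):

```python
import numpy as np

def split_spectrum(A, sigma):
    """One divide step of spectral divide-and-conquer (dense illustration only).

    Splits the symmetric matrix A into two smaller symmetric blocks whose
    spectra are the eigenvalues of A below and above the shift sigma.
    """
    n = A.shape[0]
    # Spectral projector onto eigenvalues below sigma, via the Newton
    # iteration for the matrix sign function.
    X = A - sigma * np.eye(n)
    for _ in range(50):
        X = 0.5 * (X + np.linalg.inv(X))
    P = 0.5 * (np.eye(n) - X)
    k = int(round(np.trace(P)))          # dimension of the invariant subspace
    # Orthonormal bases for range(P) and its orthogonal complement.
    U, _, _ = np.linalg.svd(P)
    Q1, Q2 = U[:, :k], U[:, k:]
    return Q1.T @ A @ Q1, Q2.T @ A @ Q2  # the two smaller problems

A = np.array([[1.0, 0.5, 0.0],
              [0.5, 2.0, 0.5],
              [0.0, 0.5, 6.0]])
A1, A2 = split_spectrum(A, sigma=4.0)
print(A1.shape, A2.shape)  # (2, 2) (1, 1)

# The two sub-spectra together reproduce the spectrum of A.
w_all = np.sort(np.linalg.eigvalsh(A))
w_split = np.sort(np.concatenate([np.linalg.eigvalsh(A1), np.linalg.eigvalsh(A2)]))
print(np.allclose(w_all, w_split))  # True
```

Recursing on A1 and A2 with new shifts yields the complete spectral decomposition, which is the idea the thesis makes fast via the HODLR format.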

Related lectures (133)