In mathematics, a matrix norm is a vector norm in a vector space whose elements (vectors) are matrices (of given dimensions).
Given a field $K$ of either real or complex numbers (that is, $K = \mathbb{R}$ or $K = \mathbb{C}$), let $K^{m \times n}$ be the $K$-vector space of matrices with $m$ rows and $n$ columns and entries in the field $K$. A matrix norm is a norm on $K^{m \times n}$.
This article will always write such norms with double vertical bars (like so: $\|A\|$). Thus, the matrix norm is a function $\|\cdot\| : K^{m \times n} \to \mathbb{R}$ that must satisfy the following properties:
For all scalars $\alpha \in K$ and matrices $A, B \in K^{m \times n}$,
$\|A\| \ge 0$ (positive-valued)
$\|A\| = 0 \iff A = 0_{m,n}$ (definite)
$\|\alpha A\| = |\alpha| \, \|A\|$ (absolutely homogeneous)
$\|A + B\| \le \|A\| + \|B\|$ (sub-additive or satisfying the triangle inequality)
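As an illustration (not part of the definition), the following Python sketch spot-checks these four axioms numerically for the Frobenius norm; the matrices `A`, `B` and the scalar `alpha` are arbitrary choices for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))  # arbitrary 3x4 real matrix
B = rng.standard_normal((3, 4))
alpha = -2.5

def fro(M):
    """Frobenius norm, used here as one concrete example of a matrix norm."""
    return np.linalg.norm(M, "fro")

assert fro(A) >= 0                                      # positive-valued
assert np.isclose(fro(np.zeros((3, 4))), 0.0)           # definite at the zero matrix
assert np.isclose(fro(alpha * A), abs(alpha) * fro(A))  # absolutely homogeneous
assert fro(A + B) <= fro(A) + fro(B) + 1e-12            # triangle inequality
```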
The only feature distinguishing matrices from rearranged vectors is multiplication. Matrix norms are particularly useful if they are also sub-multiplicative:
$\|AB\| \le \|A\| \, \|B\|$
whenever the product $AB$ is defined.
Every norm on $K^{n \times n}$ can be rescaled to be sub-multiplicative; in some books, the terminology matrix norm is reserved for sub-multiplicative norms.
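A quick numerical check of sub-multiplicativity, here with the Frobenius norm (which is known to be sub-multiplicative); the random matrices are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 3))

lhs = np.linalg.norm(A @ B, "fro")
rhs = np.linalg.norm(A, "fro") * np.linalg.norm(B, "fro")
print(lhs <= rhs)  # True: ||AB|| <= ||A|| ||B||
```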
Operator norm
Suppose a vector norm $\|\cdot\|_\alpha$ on $K^n$ and a vector norm $\|\cdot\|_\beta$ on $K^m$ are given. Any matrix $A \in K^{m \times n}$ induces a linear operator from $K^n$ to $K^m$ with respect to the standard basis, and one defines the corresponding induced norm or operator norm or subordinate norm on the space of all $m \times n$ matrices as follows:
$\|A\|_{\alpha,\beta} = \sup \{ \|Ax\|_\beta : x \in K^n,\ \|x\|_\alpha = 1 \} = \sup \left\{ \frac{\|Ax\|_\beta}{\|x\|_\alpha} : x \in K^n,\ x \neq 0 \right\}$
where $\sup$ denotes the supremum. This norm measures how much the mapping induced by $A$ can stretch vectors.
Depending on the vector norms $\|\cdot\|_\alpha$, $\|\cdot\|_\beta$ used, notation other than $\|\cdot\|_{\alpha,\beta}$ can be used for the operator norm.
If the $p$-norm for vectors ($1 \le p \le \infty$) is used for both spaces $K^n$ and $K^m$, then the corresponding operator norm is:
$\|A\|_p = \sup_{x \neq 0} \frac{\|Ax\|_p}{\|x\|_p}.$
These induced norms are different from the "entry-wise" $p$-norms and the Schatten $p$-norms for matrices treated below, which are also usually denoted by $\|A\|_p$.
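As a sketch of how the defining supremum relates to library routines: NumPy's `numpy.linalg.norm(A, ord=p)` returns the induced norm for `ord` in `{1, 2, np.inf}`. The random-sampling loop below gives only a crude lower bound on the supremum and is included just to connect the formula to the computation:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))

for p in (1, 2, np.inf):
    exact = np.linalg.norm(A, ord=p)  # induced p-norm
    xs = rng.standard_normal((10_000, 3))  # random nonzero vectors x
    # ratios ||Ax||_p / ||x||_p; their max approaches the supremum from below
    ratios = np.linalg.norm(xs @ A.T, ord=p, axis=1) / np.linalg.norm(xs, ord=p, axis=1)
    print(p, exact, ratios.max())
```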
In the special cases of $p = 1, \infty$, the induced matrix norms can be computed or estimated by
$\|A\|_1 = \max_{1 \le j \le n} \sum_{i=1}^{m} |a_{ij}|,$
which is simply the maximum absolute column sum of the matrix;
$\|A\|_\infty = \max_{1 \le i \le m} \sum_{j=1}^{n} |a_{ij}|,$
which is simply the maximum absolute row sum of the matrix.
For example, for
$A = \begin{bmatrix} -3 & 5 & 7 \\ 2 & 6 & 4 \\ 0 & 2 & 8 \end{bmatrix},$
we have that
$\|A\|_1 = \max(|{-3}|+2+0,\ 5+6+2,\ 7+4+8) = \max(5, 13, 19) = 19,$
$\|A\|_\infty = \max(3+5+7,\ 2+6+4,\ 0+2+8) = \max(15, 12, 10) = 15.$
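These two sums can be checked directly in Python; the sketch below assumes the example matrix given above:

```python
import numpy as np

A = np.array([[-3, 5, 7],
              [ 2, 6, 4],
              [ 0, 2, 8]])

print(np.abs(A).sum(axis=0).max(), np.linalg.norm(A, 1))       # 19 19.0  (max column sum)
print(np.abs(A).sum(axis=1).max(), np.linalg.norm(A, np.inf))  # 15 15.0  (max row sum)
```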
In the special case of $p = 2$ (the Euclidean norm or $\ell_2$-norm for vectors), the induced matrix norm is the spectral norm: $\|A\|_2 = \sqrt{\lambda_{\max}(A^* A)} = \sigma_{\max}(A)$, the largest singular value of $A$. (The two values do not coincide in infinite dimensions; see Spectral radius for further discussion.)
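Since the spectral norm equals the largest singular value, it can be computed from the SVD; a minimal check with an arbitrary matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3))

sigma_max = np.linalg.svd(A, compute_uv=False)[0]   # singular values come sorted descending
print(np.isclose(sigma_max, np.linalg.norm(A, 2)))  # True: ||A||_2 == sigma_max(A)
```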