Hilbert space
In mathematics, Hilbert spaces (named after David Hilbert) allow the methods of linear algebra and calculus to be generalized from (finite-dimensional) Euclidean vector spaces to spaces that may be infinite-dimensional. Hilbert spaces arise naturally and frequently in mathematics and physics, typically as function spaces. Formally, a Hilbert space is a vector space equipped with an inner product that induces a distance function for which the space is a complete metric space.
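A minimal sketch of the last sentence, taking R^3 with the dot product as a finite-dimensional model: the inner product induces a norm, and the norm induces the distance function for which completeness is required (completeness is automatic in finite dimension; the interesting cases are infinite-dimensional function spaces).

```python
import numpy as np

# Illustrative vectors in R^3, a finite-dimensional Hilbert space under the dot product.
x = np.array([1.0, 2.0, 2.0])
y = np.array([0.0, 1.0, 4.0])

inner = np.dot(x, y)                    # inner product <x, y>
norm_x = np.sqrt(np.dot(x, x))          # induced norm ||x|| = sqrt(<x, x>)
dist = np.sqrt(np.dot(x - y, x - y))    # induced distance d(x, y) = ||x - y||

print(inner, norm_x, dist)
```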
Affine space
In mathematics, an affine space is a geometric structure that generalizes some of the properties of Euclidean spaces in such a way that these are independent of the concepts of distance and measure of angles, keeping only the properties related to parallelism and ratio of lengths for parallel line segments. In an affine space, there is no distinguished point that serves as an origin. Hence, no vector has a fixed origin and no vector can be uniquely associated to a point.
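A minimal sketch of the point/vector distinction, using coordinate arrays only as stand-ins for points: the difference of two points is a vector, a point plus a vector is again a point, and an affine combination (weights summing to 1) does not depend on a choice of origin.

```python
import numpy as np

p = np.array([1.0, 3.0])   # a point (its coordinates depend on an arbitrary choice of chart)
q = np.array([4.0, 7.0])   # another point

v = q - p                  # difference of points: a well-defined vector
r = p + 0.5 * v            # point + vector: the midpoint of p and q

m = 0.5 * p + 0.5 * q      # affine combination with weights 0.5 + 0.5 = 1: the same midpoint
print(v, r, m)
```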
Linear subspace
In mathematics, and more specifically in linear algebra, a linear subspace or vector subspace is a vector space that is a subset of some larger vector space. A linear subspace is usually simply called a subspace when the context serves to distinguish it from other types of subspaces. If V is a vector space over a field K and if W is a subset of V, then W is a linear subspace of V if under the operations of V, W is a vector space over K.
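A minimal sketch, assuming the usual equivalent criterion (W contains the zero vector and is closed under addition and scalar multiplication), with the plane z = 0 inside V = R^3 as the illustrative subspace:

```python
import numpy as np

def in_W(v):
    """Membership test for W = {(x, y, z) in R^3 : z = 0}."""
    return v[2] == 0.0

u = np.array([1.0, 2.0, 0.0])
w = np.array([-3.0, 5.0, 0.0])

assert in_W(np.zeros(3))    # the zero vector of V lies in W
assert in_W(u + w)          # W is closed under addition
assert in_W(4.0 * u)        # W is closed under scalar multiplication
```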
Zorn's lemma
Zorn's lemma, also known as the Kuratowski–Zorn lemma, is a proposition of set theory. It states that a partially ordered set containing upper bounds for every chain (that is, every totally ordered subset) necessarily contains at least one maximal element. The lemma was proved (assuming the axiom of choice) by Kazimierz Kuratowski in 1922 and independently by Max Zorn in 1935.
Linear combination
In mathematics, a linear combination is an expression constructed from a set of terms by multiplying each term by a constant and adding the results (e.g. a linear combination of x and y would be any expression of the form ax + by, where a and b are constants). The concept of linear combinations is central to linear algebra and related fields of mathematics. Most of this article deals with linear combinations in the context of a vector space over a field, with some generalizations given at the end of the article.
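A minimal sketch of the expression ax + by from above, taking R^3 as the ambient vector space and the scalars a, b from the field of real numbers:

```python
import numpy as np

x = np.array([1.0, 0.0, 2.0])
y = np.array([0.0, 1.0, -1.0])
a, b = 3.0, -2.0

combo = a * x + b * y       # the linear combination ax + by
print(combo)                # [ 3. -2.  8.]
```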
Linear independence
In the theory of vector spaces, a set of vectors is said to be linearly independent if there exists no nontrivial linear combination of the vectors that equals the zero vector. If such a linear combination exists, then the vectors are said to be linearly dependent. These concepts are central to the definition of dimension. A vector space can be of finite dimension or infinite dimension depending on the maximum number of linearly independent vectors. The definition of linear dependence and the ability to determine whether a subset of vectors in a vector space is linearly dependent are central to determining the dimension of a vector space.
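A minimal sketch of one common way to test this numerically, assuming NumPy: the columns of a matrix are linearly independent exactly when the matrix rank equals the number of columns.

```python
import numpy as np

independent = np.column_stack([[1, 0, 0], [0, 1, 0], [1, 1, 1]])
dependent   = np.column_stack([[1, 2, 3], [2, 4, 6], [0, 1, 0]])  # first two columns are proportional

print(np.linalg.matrix_rank(independent))  # 3 -> the three columns are linearly independent
print(np.linalg.matrix_rank(dependent))    # 2 -> the three columns are linearly dependent
```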
Module (mathematics)
In mathematics, a module is a generalization of the notion of vector space in which the field of scalars is replaced by a ring. The concept of module generalizes also the notion of abelian group, since the abelian groups are exactly the modules over the ring of integers. Like a vector space, a module is an additive abelian group, and scalar multiplication is distributive over the operation of addition between elements of the ring or module and is compatible with the ring multiplication.
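A minimal sketch of the remark that abelian groups are modules over the integers, using the group Z/6 (addition mod 6) as an illustrative example: scalar multiplication by an integer n is repeated addition, and distributivity can be spot-checked directly.

```python
MOD = 6

def add(a, b):
    """The abelian group operation on Z/6."""
    return (a + b) % MOD

def scalar_mul(n, a):
    """Action of the ring Z on the group: n.a is a added to itself n times."""
    return (n * a) % MOD

# Spot-check the distributivity axiom n.(a + b) == n.a + n.b on a few values.
for n in range(-3, 4):
    for a in range(MOD):
        for b in range(MOD):
            assert scalar_mul(n, add(a, b)) == add(scalar_mul(n, a), scalar_mul(n, b))
```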
Convex hull
In geometry, the convex hull or convex envelope or convex closure of a shape is the smallest convex set that contains it. The convex hull may be defined either as the intersection of all convex sets containing a given subset of a Euclidean space, or equivalently as the set of all convex combinations of points in the subset. For a bounded subset of the plane, the convex hull may be visualized as the shape enclosed by a rubber band stretched around the subset. Convex hulls of open sets are open, and convex hulls of compact sets are compact.
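A minimal sketch of the rubber-band picture, assuming SciPy is available: for a finite planar point set, the hull is spanned by its extreme points, and interior points play no role.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Four corners of a square plus one interior point; the interior point is redundant.
points = np.array([[0, 0], [2, 0], [2, 2], [0, 2], [1, 1]])
hull = ConvexHull(points)

print(hull.vertices)   # indices of the extreme points (the four corners)
print(hull.volume)     # in 2D this is the enclosed area: 4.0
```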
Orthonormal basis
In mathematics, particularly linear algebra, an orthonormal basis for an inner product space V with finite dimension is a basis for V whose vectors are orthonormal, that is, they are all unit vectors and orthogonal to each other. For example, the standard basis for the Euclidean space R^n is an orthonormal basis, where the relevant inner product is the dot product of vectors. The image of the standard basis under a rotation or reflection (or any orthogonal transformation) is also orthonormal, and every orthonormal basis for R^n arises in this fashion.
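A minimal sketch in R^2: the standard basis is orthonormal, and its image under a rotation (an orthogonal transformation) is again orthonormal.

```python
import numpy as np

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # standard basis of R^2
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])        # rotation by 30 degrees

f1, f2 = R @ e1, R @ e2                                 # image of the standard basis
print(np.dot(f1, f1), np.dot(f2, f2))  # both 1: the images are unit vectors
print(np.dot(f1, f2))                  # ~0: the images are orthogonal
```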
Standard basis
In mathematics, the standard basis (also called natural basis or canonical basis) of a coordinate vector space (such as R^n or C^n) is the set of vectors, each of whose components are all zero, except one that equals 1. For example, in the case of the Euclidean plane R^2 formed by the pairs (x, y) of real numbers, the standard basis is formed by the vectors ex = (1, 0) and ey = (0, 1). Similarly, the standard basis for the three-dimensional space R^3 is formed by the vectors ex = (1, 0, 0), ey = (0, 1, 0), and ez = (0, 0, 1). Here the vector ex points in the x direction, the vector ey points in the y direction, and the vector ez points in the z direction.
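A minimal sketch in R^3: the standard basis vectors are the rows of the identity matrix, and any vector is the linear combination of them whose coefficients are its own coordinates.

```python
import numpy as np

e_x, e_y, e_z = np.eye(3)          # each row has a single component equal to 1
v = np.array([2.0, -1.0, 5.0])

reconstructed = v[0] * e_x + v[1] * e_y + v[2] * e_z
print(np.allclose(v, reconstructed))   # True: the coordinates of v are its coefficients
```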