In geometry, a hyperplane is a subspace whose dimension is one less than that of its ambient space. For example, if a space is 3-dimensional then its hyperplanes are the 2-dimensional planes, while if the space is 2-dimensional, its hyperplanes are the 1-dimensional lines. This notion can be used in any general space in which the concept of the dimension of a subspace is defined.
In different settings, hyperplanes may have different properties. For instance, a hyperplane of an n-dimensional affine space is a flat subset of dimension n − 1 that separates the space into two half spaces, whereas a hyperplane of an n-dimensional projective space does not have this property.
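For a concrete illustration (the specific coordinates are our own, not taken from the text above): in 3-dimensional affine space with coordinates $(x, y, z)$, the plane $z = 0$ is a hyperplane, and its complement falls into the two half spaces $\{z > 0\}$ and $\{z < 0\}$. In the real projective plane, by contrast, the complement of a line is connected (it is an ordinary affine plane), which is why a projective hyperplane does not separate its ambient space.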
The difference in dimension between a subspace S and its ambient space X is known as the codimension of S with respect to X. Therefore, a necessary and sufficient condition for S to be a hyperplane in X is for S to have codimension one in X.
In geometry, a hyperplane of an n-dimensional space V is a subspace of dimension n − 1, or equivalently, of codimension 1 in V. The space V may be a Euclidean space or, more generally, an affine space, a vector space or a projective space, and the notion of hyperplane varies correspondingly, since the definition of subspace differs in these settings; in all cases, however, any hyperplane can be given in coordinates as the solution of a single algebraic equation of degree 1, the single equation reflecting the "codimension 1" constraint.
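To make the last remark explicit (the notation is standard but not fixed by the text above): choosing coordinates $x_1, \ldots, x_n$ on the n-dimensional space, a hyperplane is the solution set of one linear equation $a_1 x_1 + a_2 x_2 + \cdots + a_n x_n = b$ with $(a_1, \ldots, a_n) \neq (0, \ldots, 0)$. A single equation of degree 1 cuts the dimension down by exactly one, which is the codimension-1 condition; in the vector-space setting the hyperplane is a linear subspace precisely when $b = 0$.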
If V is a vector space, one distinguishes "vector hyperplanes" (which are linear subspaces, and therefore must pass through the origin) and "affine hyperplanes" (which need not pass through the origin; they can be obtained by translation of a vector hyperplane). A hyperplane in a Euclidean space separates that space into two half spaces, and defines a reflection that fixes the hyperplane and interchanges those two half spaces.
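The reflection mentioned above can be written down explicitly. Below is a minimal sketch in Python/NumPy, assuming the affine hyperplane is given as $\{x : a \cdot x = b\}$ with a nonzero normal vector a; the function name and the example inputs are our own illustration, not taken from any source code.

import numpy as np

def reflect(x, a, b):
    """Reflect the point x across the affine hyperplane {y : a . y = b}.

    The map x -> x - 2 * ((a . x - b) / ||a||^2) * a fixes every point of the
    hyperplane and exchanges the two half spaces a . y > b and a . y < b.
    """
    x = np.asarray(x, dtype=float)
    a = np.asarray(a, dtype=float)
    return x - 2.0 * (np.dot(a, x) - b) / np.dot(a, a) * a

# Example: reflect across the plane z = 0 in 3-space (normal a = (0, 0, 1), b = 0).
print(reflect([1.0, 2.0, 3.0], [0.0, 0.0, 1.0], 0.0))  # -> [ 1.  2. -3.]

Applying the function twice returns the original point, as expected of a reflection.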
Several specific types of hyperplanes are defined with properties that are well suited for particular purposes. Some of these specializations are described here.
In mathematics, an affine space is a geometric structure that generalizes some of the properties of Euclidean spaces in such a way that these are independent of the concepts of distance and measure of angles, keeping only the properties related to parallelism and ratio of lengths for parallel line segments. In an affine space, there is no distinguished point that serves as an origin. Hence, no vector has a fixed origin and no vector can be uniquely associated to a point.
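A small worked example of the last point (the point and origin labels are ours): two points P and Q of an affine space determine a translation vector $\vec{PQ}$, but a single point determines no vector at all. Expressions such as the midpoint $\tfrac{1}{2}P + \tfrac{1}{2}Q$ are nevertheless meaningful: picking any auxiliary origin O and computing $O + \tfrac{1}{2}\vec{OP} + \tfrac{1}{2}\vec{OQ}$ gives the same point for every choice of O, because the coefficients sum to 1. Combinations of points whose coefficients do not sum to 1 are, by contrast, origin-dependent and therefore not defined in an affine space.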
In geometry, a three-dimensional space (3D space, 3-space or, rarely, tri-dimensional space) is a mathematical space in which three values (coordinates) are required to determine the position of a point. Most commonly, it is the three-dimensional Euclidean space, the Euclidean n-space of dimension n = 3 that models physical space. More general three-dimensional spaces are called 3-manifolds. Technically, a tuple of n numbers can be understood as the Cartesian coordinates of a location in an n-dimensional Euclidean space.
In mathematics, the concept of a projective space originated from the visual effect of perspective, where parallel lines seem to meet at infinity. A projective space may thus be viewed as the extension of a Euclidean space, or, more generally, an affine space with points at infinity, in such a way that there is one point at infinity of each direction of parallel lines. This definition of a projective space has the disadvantage of not being isotropic, having two different sorts of points, which must be considered separately in proofs.
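A standard worked example (homogeneous coordinates are not introduced in the text above, so we state the convention here): the real projective plane can be described by homogeneous coordinates $[x : y : z]$, not all zero and taken up to a common nonzero factor. Points with $z \neq 0$ correspond to the ordinary affine points $(x/z, y/z)$, while points with $z = 0$ are the points at infinity. The parallel affine lines $y = m x + b_1$ and $y = m x + b_2$ with $b_1 \neq b_2$ homogenize to $m x - y + b_1 z = 0$ and $m x - y + b_2 z = 0$, and setting $z = 0$ shows that both pass through the single point at infinity $[1 : m : 0]$: one extra point for each direction of parallel lines, exactly as described above.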
In this course, students learn to design and master algorithms and core concepts related to inference and learning from data, as well as the foundations of adaptation and learning theories, with applications.
This course aims to introduce the basic principles of machine learning in the context of the digital humanities. We will cover both supervised and unsupervised learning techniques, and study and implement ...
In this paper we derive quantitative estimates in the context of stochastic homogenization for integral functionals defined on finite partitions, where the random surface integrand is assumed to be stationary. Requiring the integrand to satisfy in addition ...
2022
We study how permutation symmetries in overparameterized multi-layer neural networks generate 'symmetry-induced' critical points. Assuming a network with L layers of minimal widths $r_1^*, \ldots, r_{L-1}^*$ reaches a zero-loss minimum at $r_1^*! \cdots r_{L-1}^*!$ ...
We prove that every online learnable class of functions of Littlestone dimension d admits a learning algorithm with finite information complexity. Towards this end, we use the notion of a globally stable algorithm. Generally, the information complexity of ...