Principal component analysis
Principal component analysis (PCA) is a popular technique for analyzing large datasets containing a high number of dimensions/features per observation, increasing the interpretability of data while preserving the maximum amount of information, and enabling the visualization of multidimensional data. Formally, PCA is a statistical technique for reducing the dimensionality of a dataset. This is accomplished by linearly transforming the data into a new coordinate system where (most of) the variation in the data can be described with fewer dimensions than the initial data.
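As a concrete sketch of this linear transformation (a minimal NumPy example; the `pca` helper and its arguments are illustrative, not taken from any particular library), the following centers the data, eigendecomposes the covariance matrix, and projects onto the leading eigenvectors:

```python
import numpy as np

def pca(X, n_components=2):
    """Project rows of X onto the top principal components.

    X: (n_samples, n_features) data matrix.
    Returns the projected data (scores) and the components (loadings).
    """
    # Center the data so each feature has zero mean.
    X_centered = X - X.mean(axis=0)
    # Covariance matrix of the features.
    cov = np.cov(X_centered, rowvar=False)
    # Eigendecomposition; eigh is used because cov is symmetric.
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Sort eigenpairs by decreasing variance explained.
    order = np.argsort(eigvals)[::-1]
    components = eigvecs[:, order[:n_components]]
    scores = X_centered @ components
    return scores, components

# Example: reduce 5-dimensional random data to 2 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
scores, components = pca(X, n_components=2)
print(scores.shape)  # (100, 2)
```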
Pseudo-Riemannian manifold
In differential geometry, a pseudo-Riemannian manifold, also called a semi-Riemannian manifold, is a differentiable manifold with a metric tensor that is everywhere nondegenerate. This is a generalization of a Riemannian manifold in which the requirement of positive-definiteness is relaxed. Every tangent space of a pseudo-Riemannian manifold is a pseudo-Euclidean vector space. A special case used in general relativity is a four-dimensional Lorentzian manifold for modeling spacetime, where tangent vectors can be classified as timelike, null, and spacelike.
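As a standard worked example (stated here with the (-,+,+,+) sign convention, which varies between authors), flat Minkowski space is a Lorentzian manifold, and the classification of tangent vectors reads:

```latex
% Minkowski metric on \mathbb{R}^4 with signature (-,+,+,+)
\eta = -\,dt^2 + dx^2 + dy^2 + dz^2,
\qquad
\eta(v,v) < 0 \ \text{(timelike)}, \quad
\eta(v,v) = 0 \ \text{(null)}, \quad
\eta(v,v) > 0 \ \text{(spacelike)}.
```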
Manifold
In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point. More precisely, an n-dimensional manifold, or n-manifold for short, is a topological space with the property that each point has a neighborhood that is homeomorphic to an open subset of n-dimensional Euclidean space. One-dimensional manifolds include lines and circles, but not lemniscates. Two-dimensional manifolds are also called surfaces. Examples include the plane, the sphere, and the torus, and also the Klein bottle and real projective plane.
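For instance, the unit circle is a 1-manifold: no single chart covers it, but two overlapping angle charts do, as in the sketch below (the chart names are illustrative):

```latex
% The unit circle as a 1-manifold covered by two angle charts.
S^1 = \{(x, y) \in \mathbb{R}^2 : x^2 + y^2 = 1\},
\qquad
\varphi_1 : S^1 \setminus \{(1, 0)\} \to (0, 2\pi),
\qquad
\varphi_2 : S^1 \setminus \{(-1, 0)\} \to (-\pi, \pi),
```

with each chart sending a point to its angle; every point lies in at least one chart domain, and each chart is a homeomorphism onto an open interval.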
Kernel principal component analysis
In the field of multivariate statistics, kernel principal component analysis (kernel PCA) is an extension of principal component analysis (PCA) using techniques of kernel methods. Using a kernel, the originally linear operations of PCA are performed in a reproducing kernel Hilbert space. Recall that conventional PCA operates on zero-centered data; that is, \(\frac{1}{N}\sum_{i=1}^{N} \mathbf{x}_i = \mathbf{0}\), where \(\mathbf{x}_i\) is one of the \(N\) multivariate observations.
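A minimal NumPy sketch of the idea follows; the RBF kernel, the gamma value, and the `kernel_pca` helper are illustrative assumptions, not a prescribed implementation. It forms the kernel (Gram) matrix, double-centers it (the feature-space analogue of the zero-centering above), and projects onto its leading eigenvectors:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA with an RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    # Pairwise squared distances and the kernel (Gram) matrix.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * sq_dists)
    # Center the kernel matrix, which corresponds to centering in feature space.
    n = K.shape[0]
    one_n = np.full((n, n), 1.0 / n)
    K_centered = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Eigendecomposition of the centered kernel matrix.
    eigvals, eigvecs = np.linalg.eigh(K_centered)
    order = np.argsort(eigvals)[::-1][:n_components]
    # Projections of the training points onto the leading feature-space
    # components (eigenvectors scaled by the square roots of the eigenvalues).
    return eigvecs[:, order] * np.sqrt(np.clip(eigvals[order], 0, None))

X = np.random.default_rng(0).normal(size=(50, 3))
Z = kernel_pca(X, n_components=2, gamma=0.5)
print(Z.shape)  # (50, 2)
```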
Differentiable manifold
In mathematics, a differentiable manifold (also differential manifold) is a type of manifold that is locally similar enough to a vector space to allow one to apply calculus. Any manifold can be described by a collection of charts (atlas). One may then apply ideas from calculus while working within the individual charts, since each chart lies within a vector space to which the usual rules of calculus apply. If the charts are suitably compatible (namely, the transition from one chart to another is differentiable), then computations done in one chart are valid in any other differentiable chart.
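Concretely, the compatibility condition can be written as follows (a standard formulation, stated here as a sketch): for two overlapping charts, the change of coordinates between them must be differentiable.

```latex
% Compatibility of two charts (U, \varphi) and (V, \psi) on a manifold M:
% the transition map between their overlapping coordinate images
\psi \circ \varphi^{-1} :
  \varphi(U \cap V) \subseteq \mathbb{R}^n
  \longrightarrow
  \psi(U \cap V) \subseteq \mathbb{R}^n
\quad \text{is required to be differentiable (e.g. } C^\infty \text{ for a smooth manifold).}
```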
Riemannian manifold
In differential geometry, a Riemannian manifold or Riemannian space (M, g), so called after the German mathematician Bernhard Riemann, is a real, smooth manifold M equipped with a positive-definite inner product \(g_p\) on the tangent space \(T_pM\) at each point p. The family \(g_p\) of inner products is called a Riemannian metric (or Riemannian metric tensor). Riemannian geometry is the study of Riemannian manifolds. A common convention is to take g to be smooth, which means that for any smooth coordinate chart (U, x) on M, the \(n^2\) functions \(g\!\left(\tfrac{\partial}{\partial x^i}, \tfrac{\partial}{\partial x^j}\right) : U \to \mathbb{R}\) are smooth functions.
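In local coordinates the metric is commonly written through these component functions, as sketched below; the flat Euclidean metric on \(\mathbb{R}^n\) is the basic example.

```latex
% Local coordinate expression of a Riemannian metric and the Euclidean example
g_{ij} = g\!\left(\frac{\partial}{\partial x^i}, \frac{\partial}{\partial x^j}\right),
\qquad
g = \sum_{i,j=1}^{n} g_{ij}\, dx^i \otimes dx^j,
\qquad
\text{Euclidean case: } g_{ij} = \delta_{ij}.
```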
Hyperkähler manifold
In differential geometry, a hyperkähler manifold is a Riemannian manifold endowed with three integrable almost complex structures I, J, K that are Kähler with respect to the Riemannian metric and satisfy the quaternionic relations \(I^2 = J^2 = K^2 = IJK = -1\). In particular, it is a hypercomplex manifold. All hyperkähler manifolds are Ricci-flat and are thus Calabi–Yau manifolds. Hyperkähler manifolds were defined by Eugenio Calabi in 1979. Equivalently, a hyperkähler manifold is a Riemannian manifold of dimension \(4n\) whose holonomy group is contained in the compact symplectic group Sp(n).
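The simplest example, stated here only as a sketch, is flat quaternionic coordinate space with its standard metric:

```latex
% Flat example (sketch): quaternionic coordinate space with its standard metric,
% with the three complex structures given by multiplication by the quaternion units
\mathbb{H}^n \cong \mathbb{R}^{4n},
\qquad
I, J, K \ \text{induced by the quaternion units } i, j, k,
\qquad
I^2 = J^2 = K^2 = IJK = -1 .
```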
Kähler manifold
In mathematics and especially differential geometry, a Kähler manifold is a manifold with three mutually compatible structures: a complex structure, a Riemannian structure, and a symplectic structure. The concept was first studied by Jan Arnoldus Schouten and David van Dantzig in 1930, and then introduced by Erich Kähler in 1933. The terminology was fixed by André Weil.
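The mutual compatibility of the three structures can be summarized by the standard relations below (written as a sketch, with one common sign convention; J is the complex structure, g the Riemannian metric, and ω the symplectic form):

```latex
% Compatibility conditions on a K\"ahler manifold (M, g, J, \omega)
g(JX, JY) = g(X, Y),
\qquad
\omega(X, Y) = g(JX, Y),
\qquad
d\omega = 0 .
```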
Curvature of Riemannian manifolds
In mathematics, specifically differential geometry, the infinitesimal geometry of Riemannian manifolds with dimension greater than 2 is too complicated to be described by a single number at a given point. Riemann introduced an abstract and rigorous way to define curvature for these manifolds, now known as the Riemann curvature tensor. Similar notions have found applications everywhere in differential geometry of surfaces and other objects. The curvature of a pseudo-Riemannian manifold can be expressed in the same way with only slight modifications.
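The curvature tensor can be written in terms of the Levi-Civita connection ∇ as follows (the standard coordinate-free expression, given here as a sketch; sign conventions vary between authors):

```latex
% Riemann curvature tensor in terms of the Levi-Civita connection \nabla
R(X, Y)Z = \nabla_X \nabla_Y Z - \nabla_Y \nabla_X Z - \nabla_{[X, Y]} Z .
```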
Nonlinear dimensionality reduction
Nonlinear dimensionality reduction, also known as manifold learning, refers to various related techniques that aim to project high-dimensional data onto lower-dimensional latent manifolds, with the goal of either visualizing the data in the low-dimensional space, or learning the mapping (either from the high-dimensional space to the low-dimensional embedding or vice versa) itself. The techniques described below can be understood as generalizations of linear decomposition methods used for dimensionality reduction, such as singular value decomposition and principal component analysis.
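To illustrate the contrast with linear decomposition, the sketch below (assuming scikit-learn is available; the swiss-roll dataset and parameter values are arbitrary illustrative choices) embeds the same 3-D data with PCA and with Isomap, one technique in this family:

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

# A 3-D "swiss roll": points lying on a rolled-up 2-D sheet.
X, color = make_swiss_roll(n_samples=1000, noise=0.05, random_state=0)

# Linear reduction: PCA projects onto a flat plane and mixes up the roll.
X_pca = PCA(n_components=2).fit_transform(X)

# Nonlinear reduction: Isomap uses geodesic (graph) distances along the
# manifold, so the roll is unrolled into roughly a flat 2-D strip.
X_iso = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

print(X_pca.shape, X_iso.shape)  # (1000, 2) (1000, 2)
```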