Real projective space

In mathematics, real projective space, denoted \mathbb{RP}^n or \mathbb{P}_n(\R), is the topological space of lines passing through the origin 0 in the real space \R^{n+1}. It is a compact, smooth manifold of dimension n, and is a special case \mathbf{Gr}(1, \R^{n+1}) of a Grassmannian space. As with all projective spaces, \mathbb{RP}^n is formed by taking the quotient of \R^{n+1} ∖ {0} under the equivalence relation x ∼ λx for all real numbers λ ≠ 0. For all x in \R^{n+1} ∖ {0} one can always find a λ such that λx has norm 1.
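As a small illustration of this quotient construction, the sketch below identifies two nonzero vectors of \R^{n+1} when one is a nonzero real multiple of the other, using the norm-1 representative described above; the helper names and the sign convention are assumptions of this sketch, not part of the article.

```python
import numpy as np

def projective_representative(x, tol=1e-12):
    """Return a canonical unit-norm representative of the line through x.

    The sign is fixed by making the first nonzero coordinate positive,
    so x and -x map to the same representative.
    """
    x = np.asarray(x, dtype=float)
    norm = np.linalg.norm(x)
    if norm < tol:
        raise ValueError("the zero vector does not represent a point of RP^n")
    y = x / norm
    # Fix the sign ambiguity: x and -x lie on the same line through 0.
    first_nonzero = np.flatnonzero(np.abs(y) > tol)[0]
    return y if y[first_nonzero] > 0 else -y

def same_point_in_rpn(x, y, tol=1e-9):
    """True if x and y determine the same point of RP^n."""
    return np.allclose(projective_representative(x),
                       projective_representative(y), atol=tol)

# (1, 2, 3) and (-2, -4, -6) lie on the same line through the origin in R^3,
# so they represent the same point of RP^2.
print(same_point_in_rpn([1, 2, 3], [-2, -4, -6]))  # True
print(same_point_in_rpn([1, 2, 3], [1, 2, 4]))     # False
```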
Homological algebra

Homological algebra is the branch of mathematics that studies homology in a general algebraic setting. It is a relatively young discipline, whose origins can be traced to investigations in combinatorial topology (a precursor to algebraic topology) and abstract algebra (theory of modules and syzygies) at the end of the 19th century, chiefly by Henri Poincaré and David Hilbert. Homological algebra is the study of homological functors and the intricate algebraic structures that they entail; its development was closely intertwined with the emergence of category theory.
Differential graded algebra

In mathematics, in particular in homological algebra, a differential graded algebra is a graded associative algebra with an added chain complex structure that respects the algebra structure. A differential graded algebra (or DG-algebra for short) A is a graded algebra equipped with a map d: A → A which has either degree 1 (cochain complex convention) or degree −1 (chain complex convention) and that satisfies two conditions, written out in the display below. A more succinct way to state the same definition is to say that a DG-algebra is a monoid object in the category of chain complexes.
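For concreteness, here are the two conditions in the cochain (degree +1) convention; the notation |a| for the degree of a homogeneous element is a choice made for this display, not notation fixed by the text above.

```latex
% The two conditions on the differential d of a DG-algebra (cochain convention):
\begin{align*}
  d \circ d &= 0, \\
  d(a \cdot b) &= d(a)\cdot b + (-1)^{|a|}\, a \cdot d(b)
  \qquad \text{for homogeneous } a, b \in A.
\end{align*}
```

The second condition is the graded Leibniz rule; in the chain complex convention d has degree −1 and the same two equations are required.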
Algebraic cycle

In mathematics, an algebraic cycle on an algebraic variety V is a formal linear combination of subvarieties of V. These are the part of the algebraic topology of V that is directly accessible by algebraic methods. Understanding the algebraic cycles on a variety can give profound insights into the structure of the variety. The most trivial case is codimension zero cycles, which are linear combinations of the irreducible components of the variety. The first non-trivial case is of codimension one subvarieties, called divisors.
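As a small illustration of the codimension-one case, a divisor on a curve is a finite formal integer combination of points; the specific points P and Q below are hypothetical, chosen only to show the shape of such a combination.

```latex
% A divisor on a smooth projective curve C: a formal Z-linear combination
% of (hypothetical) closed points P, Q of C.
D = 3\,[P] - 2\,[Q], \qquad \deg D = 3 - 2 = 1.
```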
Fixed point (mathematics)

Fixed points in mathematics are not to be confused with other uses of "fixed point", or with stationary points, where f′(x) = 0. In mathematics, a fixed point (sometimes shortened to fixpoint), also known as an invariant point, is a value that does not change under a given transformation. Specifically for functions, a fixed point is an element that is mapped to itself by the function. Formally, c is a fixed point of a function f if c belongs to both the domain and the codomain of f, and f(c) = c.
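As a concrete instance, the map f(x) = x² on the reals has exactly two fixed points, 0 and 1; the short check below is a hypothetical helper verifying the defining equation f(c) = c, not part of the article.

```python
def is_fixed_point(f, c, tol=1e-12):
    """Check the defining condition f(c) = c (up to floating-point tolerance)."""
    return abs(f(c) - c) <= tol

f = lambda x: x ** 2

# 0 and 1 are fixed points of x -> x^2; 2 is not, since f(2) = 4 != 2.
print(is_fixed_point(f, 0.0))  # True
print(is_fixed_point(f, 1.0))  # True
print(is_fixed_point(f, 2.0))  # False
```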
Dimension (vector space)

In mathematics, the dimension of a vector space V is the cardinality (i.e., the number of vectors) of a basis of V over its base field. It is sometimes called Hamel dimension (after Georg Hamel) or algebraic dimension to distinguish it from other types of dimension. For every vector space there exists a basis, and all bases of a vector space have equal cardinality; as a result, the dimension of a vector space is uniquely defined. We say V is finite-dimensional if the dimension of V is finite, and infinite-dimensional if its dimension is infinite.
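For a concrete finite-dimensional case, the dimension of the subspace of R^3 spanned by a set of vectors equals the rank of the matrix having those vectors as rows; the sketch below uses NumPy's matrix_rank for this, and the particular vectors are illustrative assumptions.

```python
import numpy as np

# Three vectors in R^3; the third is the sum of the first two,
# so together they span only a 2-dimensional subspace.
vectors = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0],
])

# dim(span) = rank of the matrix whose rows are the spanning vectors.
print(np.linalg.matrix_rank(vectors))  # 2

# The standard basis vectors span all of R^3, so the dimension is 3.
print(np.linalg.matrix_rank(np.eye(3)))  # 3
```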
Fixed-point iteration

In numerical analysis, fixed-point iteration is a method of computing fixed points of a function. More specifically, given a function f defined on the real numbers with real values and given a point x_0 in the domain of f, the fixed-point iteration is x_{n+1} = f(x_n) for n = 0, 1, 2, ..., which gives rise to the sequence x_0, f(x_0), f(f(x_0)), ... of iterated function applications, which is hoped to converge to a point x_fix. If f is continuous, then one can prove that the obtained x_fix is a fixed point of f, i.e., f(x_fix) = x_fix. More generally, the function f can be defined on any metric space with values in that same space.
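A minimal sketch of the iteration on the real line, assuming the classic example g(x) = cos(x), whose fixed point is the Dottie number ≈ 0.739; the function, starting point, and stopping rule are illustrative choices, not prescribed by the text.

```python
import math

def fixed_point_iteration(g, x0, tol=1e-10, max_iter=1000):
    """Iterate x_{n+1} = g(x_n) until successive iterates are within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) <= tol:
            return x_next
        x = x_next
    raise RuntimeError("fixed-point iteration did not converge")

# g(x) = cos(x) is a contraction near its fixed point, so the iteration converges.
x_fix = fixed_point_iteration(math.cos, x0=1.0)
print(x_fix)                    # ~0.7390851332 (the Dottie number)
print(math.cos(x_fix) - x_fix)  # ~0, confirming g(x_fix) = x_fix
```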
Numerical linear algebra

Numerical linear algebra, sometimes called applied linear algebra, is the study of how matrix operations can be used to create computer algorithms which efficiently and accurately provide approximate answers to questions in continuous mathematics. It is a subfield of numerical analysis, and a type of linear algebra. Computers use floating-point arithmetic and cannot exactly represent irrational data, so when a computer algorithm is applied to a matrix of data, it can sometimes increase the difference between a number stored in the computer and the true number that it is an approximation of.
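A tiny illustration of this rounding effect, using NumPy; the choice of a Hilbert matrix as an ill-conditioned example is an assumption made for illustration, not taken from the text above.

```python
import numpy as np

# 0.1 has no exact binary floating-point representation, so a small error is
# introduced even before any matrix computation is performed.
print(0.1 + 0.2 == 0.3)        # False
print(abs(0.1 + 0.2 - 0.3))    # ~5.5e-17

# Ill-conditioned matrices amplify such errors: solving Hx = b with the
# 12x12 Hilbert matrix loses many digits of accuracy in the computed x.
n = 12
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = H @ x_true
x_computed = np.linalg.solve(H, b)
print(np.max(np.abs(x_computed - x_true)))  # far larger than machine epsilon
```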
Complex conjugate of a vector space

In mathematics, the complex conjugate of a complex vector space V is a complex vector space \overline{V}, which has the same elements and additive group structure as V but whose scalar multiplication involves conjugation of the scalars. In other words, the scalar multiplication of \overline{V} satisfies \alpha * v = \overline{\alpha} \cdot v, where * is the scalar multiplication of \overline{V} and \cdot is the scalar multiplication of V. The letter v stands for a vector in V, \alpha is a complex number, and \overline{\alpha} denotes the complex conjugate of \alpha. More concretely, the complex conjugate vector space is the same underlying vector space (same set of points, same vector addition and real scalar multiplication) with the conjugate linear complex structure (different multiplication by i).
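A small sketch of the conjugate scalar multiplication using Python complex numbers; the class name ConjugateVector and the coordinate representation are assumptions for illustration only.

```python
import numpy as np

class ConjugateVector:
    """A vector of V viewed as an element of the conjugate space V-bar.

    Addition is unchanged; multiplication by a scalar a acts as ordinary
    scalar multiplication by conj(a) on the underlying coordinates.
    """
    def __init__(self, coords):
        self.coords = np.asarray(coords, dtype=complex)

    def __add__(self, other):
        return ConjugateVector(self.coords + other.coords)

    def __rmul__(self, scalar):
        # a * v in V-bar  :=  conj(a) . v in V
        return ConjugateVector(np.conjugate(scalar) * self.coords)

v = ConjugateVector([1 + 2j, 3 - 1j])
w = (1j) * v
print(w.coords)  # conj(1j) = -1j times the coordinates: [ 2.-1.j, -1.-3.j]
```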
Fixed-point theorem

In mathematics, a fixed-point theorem is a result saying that a function F will have at least one fixed point (a point x for which F(x) = x), under some conditions on F that can be stated in general terms. The Banach fixed-point theorem (1922) gives a general criterion guaranteeing that, if it is satisfied, the procedure of iterating a function yields a fixed point.
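To make the Banach criterion concrete, the map F(x) = x/2 + 1 on the reals is a contraction with constant q = 1/2, so iteration from any starting point converges to its unique fixed point x = 2; the helper below and the a priori error estimate it prints are an illustrative sketch, not the theorem's general statement.

```python
def iterate(F, x0, n):
    """Apply F to x0 a total of n times."""
    x = x0
    for _ in range(n):
        x = F(x)
    return x

# F(x) = x/2 + 1 satisfies |F(x) - F(y)| = 0.5 * |x - y|: a contraction, q = 0.5.
# Its unique fixed point solves x = x/2 + 1, i.e. x = 2.
F = lambda x: x / 2 + 1

x0 = 10.0
for n in (1, 5, 10, 20):
    # Banach a priori estimate: |x_n - x*| <= q**n / (1 - q) * |F(x0) - x0|
    bound = 0.5 ** n / (1 - 0.5) * abs(F(x0) - x0)
    print(n, abs(iterate(F, x0, n) - 2.0), bound)
```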