Symmetric polynomial: In mathematics, a symmetric polynomial is a polynomial P(X1, X2, ..., Xn) in n variables such that if any of the variables are interchanged, one obtains the same polynomial. Formally, P is a symmetric polynomial if for any permutation σ of the subscripts 1, 2, ..., n one has P(Xσ(1), Xσ(2), ..., Xσ(n)) = P(X1, X2, ..., Xn). Symmetric polynomials arise naturally in the study of the relation between the roots of a polynomial in one variable and its coefficients, since the coefficients can be given by polynomial expressions in the roots, and all roots play a similar role in this setting.
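As a quick illustration, a minimal Python sketch (the function name P is chosen just for this example) evaluating the elementary symmetric polynomial x1x2 + x1x3 + x2x3, which takes the same value at every permutation of a point:

    from itertools import permutations

    def P(x1, x2, x3):
        # The elementary symmetric polynomial of degree 2 in three variables.
        return x1*x2 + x1*x3 + x2*x3

    point = (2, 3, 5)
    # A single value, {31}, since P is invariant under all permutations.
    print({P(*perm) for perm in permutations(point)})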
Polynomial long division: In algebra, polynomial long division is an algorithm for dividing a polynomial by another polynomial of the same or lower degree, a generalized version of the familiar arithmetic technique called long division. It can be done easily by hand, because it separates an otherwise complex division problem into smaller ones. Sometimes using a shorthand version called synthetic division is faster, with less writing and fewer calculations. Another abbreviated method is polynomial short division (Blomqvist's method).
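A sketch of the algorithm in Python, assuming polynomials are given as dense coefficient lists written highest degree first (the name poly_divmod is invented for this example):

    def poly_divmod(num, den):
        # Polynomial long division on dense coefficient lists, highest
        # degree first; returns (quotient, remainder). Assumes den != 0.
        num, den = list(num), list(den)
        quotient = []
        while len(num) >= len(den):
            coeff = num[0] / den[0]              # next quotient coefficient
            quotient.append(coeff)
            head = [a - coeff * b for a, b in zip(num, den)]
            num = head[1:] + num[len(den):]      # leading term cancels
        return quotient, num                     # num is now the remainder

    # (x^2 - 3x + 2) / (x - 1) = x - 2, remainder 0
    print(poly_divmod([1, -3, 2], [1, -1]))      # ([1.0, -2.0], [0.0])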
Algorithm: In mathematics and computer science, an algorithm (/ˈælɡərɪðəm/) is a finite sequence of rigorous instructions, typically used to solve a class of specific problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can use conditionals to divert the code execution through various routes (referred to as automated decision-making) and deduce valid inferences (referred to as automated reasoning), eventually achieving automation.
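Euclid's algorithm for greatest common divisors is a standard textbook instance of such a finite sequence of rigorous instructions; a minimal Python rendering:

    def gcd(a, b):
        # Euclid's algorithm: repeat one rigorous, unambiguous step
        # until the terminating condition b == 0 is reached.
        while b != 0:
            a, b = b, a % b    # replace (a, b) with (b, a mod b)
        return a

    print(gcd(252, 105))   # 21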
Polynomial-time reduction: In computational complexity theory, a polynomial-time reduction is a method for solving one problem using another. One shows that if a hypothetical subroutine solving the second problem exists, then the first problem can be solved by transforming, or reducing, it to inputs for the second problem and calling the subroutine one or more times. If both the time required to transform the first problem to the second and the number of times the subroutine is called are polynomial, then the first problem is polynomial-time reducible to the second.
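A Python sketch of one classic reduction, INDEPENDENT-SET to VERTEX-COVER (both function names are hypothetical): a graph on n vertices has an independent set of size k exactly when it has a vertex cover of size n - k, so the transformation is immediate and the subroutine is called once. A brute-force stand-in for the subroutine is included only to make the sketch runnable:

    from itertools import combinations

    def has_vertex_cover(graph, k):
        # Brute-force stand-in for the hypothetical subroutine solving the
        # second problem (exponential; included only so the sketch runs).
        edges = [(u, v) for u in graph for v in graph[u] if u < v]
        return any(all(u in cover or v in cover for u, v in edges)
                   for cover in map(set, combinations(graph, k)))

    def has_independent_set(graph, k):
        # The reduction: a set S is independent iff its complement is a
        # vertex cover, so transform (graph, k) to (graph, n - k) and call
        # the subroutine once. The transformation takes polynomial time.
        return has_vertex_cover(graph, len(graph) - k)

    path = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}   # the path a - b - c
    print(has_independent_set(path, 2))                # True: {a, c}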
Fixed-point theorem: In mathematics, a fixed-point theorem is a result saying that a function F will have at least one fixed point (a point x for which F(x) = x), under some conditions on F that can be stated in general terms. The Banach fixed-point theorem (1922) gives a general criterion guaranteeing that, if it is satisfied, the procedure of iterating a function yields a fixed point.
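A minimal Python sketch of the iteration the Banach theorem describes, using cos, which is a contraction near its fixed point (the tolerance and step limit are arbitrary choices for this example):

    import math

    def iterate_to_fixed_point(f, x, tol=1e-12, max_steps=1000):
        # Repeatedly apply f; when f is a contraction, the Banach theorem
        # guarantees convergence to the unique fixed point.
        for _ in range(max_steps):
            fx = f(x)
            if abs(fx - x) < tol:
                return fx
            x = fx
        return x

    print(iterate_to_fixed_point(math.cos, 1.0))   # ~0.7390851, where cos(x) = x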
Fixed point (mathematics): In mathematics, a fixed point (sometimes shortened to fixpoint), also known as an invariant point, is a value that does not change under a given transformation. Specifically for functions, a fixed point is an element that is mapped to itself by the function. Formally, c is a fixed point of a function f if c belongs to both the domain and the codomain of f, and f(c) = c.
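For instance, c = 2 is a fixed point of f(x) = x^2 - 3x + 4, since f(2) = 4 - 6 + 4 = 2; a one-line check in Python:

    def f(x):
        return x*x - 3*x + 4

    print(f(2) == 2)   # True: 2 is a fixed point of f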
Unique factorization domain: In mathematics, a unique factorization domain (UFD) (also sometimes called a factorial ring following the terminology of Bourbaki) is a ring in which a statement analogous to the fundamental theorem of arithmetic holds. Specifically, a UFD is an integral domain (a nontrivial commutative ring in which the product of any two non-zero elements is non-zero) in which every non-zero non-unit element can be written as a product of prime elements (or irreducible elements), uniquely up to order and units.
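The integers are the prototypical example, where the statement is the fundamental theorem of arithmetic itself; a small Python sketch recovering the factorization, unique up to the order of the factors, by trial division:

    def prime_factors(n):
        # Trial division: the prime factorization of an integer n > 1,
        # returned in nondecreasing order. By unique factorization in the
        # integers, no other multiset of primes has this product.
        factors = []
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    print(prime_factors(60))   # [2, 2, 3, 5], i.e. 60 = 2^2 * 3 * 5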
Algebraic equation: In mathematics, an algebraic equation or polynomial equation is an equation of the form P = 0, where P is a polynomial with coefficients in some field, often the field of the rational numbers. For many authors, the term algebraic equation refers only to univariate equations, that is polynomial equations that involve only one variable. On the other hand, a polynomial equation may involve several variables. In the case of several variables (the multivariate case), the term polynomial equation is usually preferred to algebraic equation.
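For example, the univariate equation x^2 - x - 1 = 0 has rational coefficients and two real roots, computable with the quadratic formula; a short Python check:

    import math

    # Roots of the algebraic equation x^2 - x - 1 = 0 via the quadratic formula.
    a, b, c = 1, -1, -1
    d = math.sqrt(b*b - 4*a*c)
    print((-b + d) / (2*a), (-b - d) / (2*a))   # 1.618... and -0.618...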
Lefschetz fixed-point theorem: In mathematics, the Lefschetz fixed-point theorem is a formula that counts the fixed points of a continuous mapping f from a compact topological space X to itself by means of traces of the induced mappings of f on the homology groups of X. It is named after Solomon Lefschetz, who first stated it in 1926. The counting is subject to an imputed multiplicity at a fixed point called the fixed-point index. A weak version of the theorem is enough to show that a mapping without any fixed point must have rather special topological properties (like a rotation of a circle).
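For reference, the Lefschetz number of f is the alternating sum of traces over the rational homology of X (a standard formulation, stated here for completeness):

    Λ(f) = Σ_{k ≥ 0} (-1)^k Tr(f_* : H_k(X, Q) → H_k(X, Q)),

and the theorem asserts that if Λ(f) ≠ 0 then f has at least one fixed point.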
Time complexity: In computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor.
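A small Python sketch that counts elementary operations directly (the function name is invented for this example): linear search performs at most n comparisons on an input of size n, so under the fixed-cost assumption above its time complexity is O(n):

    def linear_search_comparisons(items, target):
        # Count the elementary comparisons a linear search performs; in the
        # worst case this is n for an input of size n, i.e. O(n) time.
        comparisons = 0
        for item in items:
            comparisons += 1
            if item == target:
                break
        return comparisons

    print(linear_search_comparisons(list(range(1000)), 999))   # 1000, worst case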