In mathematical logic, fixed-point logics are extensions of classical predicate logic that have been introduced to express recursion. Their development has been motivated by descriptive complexity theory and their relationship to database query languages, in particular to Datalog.
Least fixed-point logic was first studied systematically by Yiannis N. Moschovakis in 1974, and it was introduced to computer scientists in 1979, when Alfred Aho and Jeffrey Ullman suggested fixed-point logic as an expressive database query language.
For a relational signature X, FO(PFP,X) is the set of formulas formed from X using first-order connectives and predicates, second-order variables, as well as a partial fixed-point operator $\operatorname{PFP}$ used to form formulas of the form $[\operatorname{PFP}_{P,\mathbf{x}}\,\varphi](\mathbf{t})$, where $P$ is a second-order variable, $\mathbf{x}$ a tuple of first-order variables, $\mathbf{t}$ a tuple of terms, and the lengths of $\mathbf{x}$ and $\mathbf{t}$ coincide with the arity of $P$.
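For illustration (an example formula not taken from the original text), reachability along a binary edge relation $E$ can be written in this syntax as
\[
  \bigl[\operatorname{PFP}_{P,\,x\,y}\;\bigl(x = y \,\lor\, \exists z\,(E(x,z) \land P(z,y))\bigr)\bigr](u,v),
\]
which holds exactly when $v$ is reachable from $u$ along $E$.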
Let $k$ be an integer, $\mathbf{x}, \mathbf{y}$ be vectors of $k$ variables, $P$ be a second-order variable of arity $k$, and let $\varphi$ be an FO(PFP,X) formula using $\mathbf{x}$ and $P$ as variables. We can iteratively define $(P_i)_{i \in \mathbb{N}}$ such that $P_0 = \emptyset$ and $P_{i+1} = \{\mathbf{x} : \varphi(P_i, \mathbf{x})\}$ (meaning $\varphi$ with $P_i$ substituted for the second-order variable $P$). Then either there is a fixed point, or the sequence of $P_i$s is cyclic.
$[\operatorname{PFP}_{P,\mathbf{x}}\,\varphi](\mathbf{y})$ is defined as the value of the fixed point of $(P_i)$ on $\mathbf{y}$ if there is a fixed point, else as false. Since the $P_i$s are properties of arity $k$ over a structure with $n$ elements, there are at most $2^{n^k}$ possible values for the $P_i$s, so with a polynomial-space counter we can check whether there is a loop or not.
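The stage computation described above can be made concrete with a short sketch. The names below (partial_fixed_point, phi, universe) are hypothetical and not part of any standard library, and for simplicity the sketch stores the visited stages instead of using the polynomial-space counter mentioned above.

```python
from itertools import product

def partial_fixed_point(phi, universe, k):
    """Iterate P_0 = {} and P_{i+1} = { x : phi(P_i, x) } over a finite universe.

    Returns the fixed point (a frozenset of k-tuples) if the stages stabilise,
    or None -- read 'false everywhere' -- if the sequence becomes cyclic.
    """
    P = frozenset()                 # P_0 is the empty relation
    seen = {P}                      # stages visited so far, for cycle detection
    while True:
        # P_{i+1}: all k-tuples satisfying phi with P_i substituted for P
        P_next = frozenset(x for x in product(universe, repeat=k) if phi(P, x))
        if P_next == P:             # fixed point reached
            return P
        if P_next in seen:          # stages repeat without stabilising
            return None
        seen.add(P_next)
        P = P_next
```

Because the stage sequence is deterministic, revisiting an earlier stage that is not a fixed point means the iteration cycles forever, which is why returning None is sound here.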
It has been proven that on ordered finite structures, a property is expressible in FO(PFP,X) if and only if it lies in PSPACE.
Since the iterated predicates involved in calculating the partial fixed point are not in general monotone, the fixed point may not always exist.
FO(LFP,X), least fixed-point logic, is the set of formulas in FO(PFP,X) where the partial fixed point is taken only over formulas $\varphi$ that contain only positive occurrences of $P$ (that is, occurrences preceded by an even number of negations). This guarantees monotonicity of the fixed-point construction: that is, if the second-order variable is $P$, then $P_i(\mathbf{x})$ always implies $P_{i+1}(\mathbf{x})$.
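As a sketch of why positivity helps, consider the classic reachability example: the formula $x = y \lor \exists z\,(E(x,z) \land P(z,y))$ is positive in $P$, so its stages only grow and the iteration stops at the least fixed point, the reflexive transitive closure of the edge relation. The graph, names, and formula encoding below are illustrative only, not taken from the original text.

```python
from itertools import product

# Illustrative edge relation and universe.
edges = {(0, 1), (1, 2), (2, 3)}
universe = range(4)

def phi(P, x, y):
    # x = y  OR  exists z: E(x, z) and P(z, y)  -- only positive occurrences of P
    return x == y or any((x, z) in edges and (z, y) in P for z in universe)

P = frozenset()                                   # stage P_0: the empty relation
while True:
    P_next = frozenset((x, y) for x, y in product(universe, repeat=2)
                       if phi(P, x, y))
    if P_next == P:                               # monotone stages have stabilised
        break
    P = P_next

print(sorted(P))    # the reflexive transitive closure; includes (0, 3)
```

Because each stage is a superset of the previous one, at most $n^k$ iterations are needed, so no cycle detection is required in the monotone case, unlike the general PFP iteration.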
Descriptive complexity is a branch of computational complexity theory and of finite model theory that characterizes complexity classes by the type of logic needed to express the languages in them. For example, PH, the union of all complexity classes in the polynomial hierarchy, is precisely the class of languages expressible by statements of second-order logic.
In logic and mathematics, second-order logic is an extension of first-order logic, which itself is an extension of propositional logic. Second-order logic is in turn extended by higher-order logic and type theory. First-order logic quantifies only variables that range over individuals (elements of the domain of discourse); second-order logic, in addition, also quantifies over relations. For example, the second-order sentence $\forall P\,\forall x\,(Px \lor \neg Px)$ says that for every unary relation $P$ and every individual $x$, either $Px$ is true or $\neg Px$ is true (this is the law of excluded middle).