In mathematics, logic, and computer science, a type theory is the formal presentation of a specific type system, and in general, type theory is the academic study of type systems. Some type theories serve as alternatives to set theory as a foundation of mathematics. Two influential type theories that have been proposed as foundations are Alonzo Church's typed λ-calculus and Per Martin-Löf's intuitionistic type theory. Most computerized proof-writing systems use a type theory as their foundation; a common one is Thierry Coquand's Calculus of Inductive Constructions.
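At its simplest, a type system classifies the terms of a language and rejects those that cannot be given a type. The following Haskell sketch is purely illustrative (the toy language and the names in it are assumptions, not drawn from any of the systems mentioned above); it shows a typing judgment for a small language of numbers and booleans.

```haskell
-- A toy typing judgment: every well-formed expression is assigned a type,
-- and ill-typed expressions (e.g. adding a boolean to a number) are rejected.
data Ty = TBool | TNat deriving (Eq, Show)

data Expr
  = B Bool
  | N Integer
  | If Expr Expr Expr
  | Add Expr Expr

typeOf :: Expr -> Maybe Ty
typeOf (B _) = Just TBool
typeOf (N _) = Just TNat
typeOf (If c t e) = do
  TBool <- typeOf c                 -- the condition must be boolean
  tt <- typeOf t
  te <- typeOf e
  if tt == te then Just tt else Nothing
typeOf (Add a b) = do
  TNat <- typeOf a                  -- both operands must be numbers
  TNat <- typeOf b
  Just TNat

-- >>> typeOf (Add (N 1) (B True))
-- Nothing
```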
History of type theory
Type theory was created to avoid a paradox in a mathematical foundation based on naive set theory and formal logic. Russell's paradox, discovered by Bertrand Russell, arose because naive set theory allowed a set to be defined in terms of "all possible sets", a collection that included the set being defined. Between 1902 and 1908, Russell proposed various "theories of types" to fix the problem. By 1908 he arrived at a "ramified" theory of types together with an "axiom of reducibility", both of which featured prominently in Whitehead and Russell's Principia Mathematica, published between 1910 and 1913. This system avoided Russell's paradox by creating a hierarchy of types and then assigning each concrete mathematical entity to a type. Entities of a given type are built exclusively from entities of types lower in the hierarchy, thus preventing an entity from being defined in terms of itself. In particular, Russell's theory of types ruled out the possibility of a set being a member of itself.
Types were not always used in logic, and other techniques existed for avoiding Russell's paradox. Types gained a firm hold only when combined with one particular logic, Alonzo Church's lambda calculus.
The most famous early example is Church's simply typed lambda calculus. Church's theory of types helped the formal system avoid the Kleene–Rosser paradox that afflicted the original untyped lambda calculus. Church demonstrated that it could serve as a foundation of mathematics, and it came to be referred to as a higher-order logic.
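As a hedged sketch of what "simply typed" means, the following Haskell program (the datatype and function names are illustrative assumptions) checks lambda terms against simple types. In particular, the self-application that drives the paradoxes of the untyped calculus cannot be given a simple type.

```haskell
-- Simple types are a base type and function (arrow) types.
data Ty = Base | Arr Ty Ty deriving (Eq, Show)

data Term
  = Var String
  | Lam String Ty Term   -- \x:T. body
  | App Term Term
  deriving Show

type Ctx = [(String, Ty)]

typeOf :: Ctx -> Term -> Maybe Ty
typeOf ctx (Var x)     = lookup x ctx
typeOf ctx (Lam x t b) = Arr t <$> typeOf ((x, t) : ctx) b
typeOf ctx (App f a)   = do
  Arr dom cod <- typeOf ctx f      -- the function position needs an arrow type
  argTy       <- typeOf ctx a
  if dom == argTy then Just cod else Nothing

-- Self-application \x. x x is rejected:
-- >>> typeOf [] (Lam "x" Base (App (Var "x") (Var "x")))
-- Nothing
```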
The course introduces the foundations on which programs and programming languages are built. It introduces syntax, types and semantics as building blocks that together define the properties of a program.
A hands-on introduction to interactive theorem proving, proofs as programs, dependent types, and to the Coq proof assistant. Come learn how to write bug-free code!
In computer science and mathematical logic, a proof assistant or interactive theorem prover is a software tool that assists with the development of formal proofs by human-machine collaboration. This involves some sort of interactive proof editor, or other interface, with which a human can guide the search for proofs, the details of which are stored in, and some steps provided by, a computer. A recent effort in this field is the use of artificial intelligence to automate the formalization of ordinary mathematics.
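Proof assistants built on type theory rest on the propositions-as-types correspondence: a proposition is read as a type, and a proof is a program of that type. The Haskell sketch below illustrates the idea only; the names are assumptions and this is not the syntax of any particular assistant.

```haskell
-- Under propositions-as-types, writing a total program of a given
-- polymorphic type amounts to proving the corresponding proposition.

-- A "proof" of  A -> (B -> A)
constProof :: a -> b -> a
constProof x _ = x

-- A "proof" of  ((A -> B) /\ (B -> C)) -> (A -> C), reading /\ as a pair
composeProof :: (a -> b, b -> c) -> (a -> c)
composeProof (f, g) = g . f
```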
In mathematics and computer science, apply is a function that applies a function to arguments. It is central to programming languages derived from lambda calculus, such as LISP and Scheme, and also to functional languages in general. It has a role in the study of the denotational semantics of computer programs, because it is a continuous function on complete partial orders. Apply is also a continuous function in homotopy theory, and indeed underpins the entire theory: it allows a homotopy deformation to be viewed as a continuous path in the space of functions.
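As an illustration, apply is definable in one line in a functional language; in Haskell the standard library already provides essentially this operation as the ($) operator, and the sketch below is only for exposition.

```haskell
-- apply takes a function and an argument and applies the former to the latter.
apply :: (a -> b) -> a -> b
apply f x = f x

-- >>> apply (+ 1) 41
-- 42
-- >>> map (apply negate) [1, 2, 3]
-- [-1,-2,-3]
```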
Combinatory logic is a notation to eliminate the need for quantified variables in mathematical logic. It was introduced by Moses Schönfinkel and Haskell Curry, and has more recently been used in computer science as a theoretical model of computation and also as a basis for the design of functional programming languages. It is based on combinators, which were introduced by Schönfinkel in 1920 with the idea of providing an analogous way to build up functions—and to remove any mention of variables—particularly in predicate logic.
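The two best-known combinators, S and K, can be written directly as Haskell functions; the identity combinator I is then definable from them without mentioning any variable in its body. This is a small illustrative sketch, not a full combinatory-logic system.

```haskell
-- K discards its second argument; S distributes an argument to two functions.
k :: a -> b -> a
k x _ = x

s :: (a -> b -> c) -> (a -> b) -> a -> c
s f g x = f x (g x)

-- The identity combinator I = S K K, defined with no bound variable on the right.
i :: a -> a
i = s k k
```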
In computer science, a high-level programming language is a programming language with strong abstraction from the details of the computer. In contrast to low-level programming languages, it may use natural language elements, be easier to use, or may automate (or even hide entirely) significant areas of computing systems (e.g. memory management), making the process of developing a program simpler and more understandable than when using a lower-level language. The amount of abstraction provided defines how "high-level" a programming language is.
In programming language theory, semantics is the rigorous mathematical study of the meaning of programming languages. Semantics assigns computational meaning to valid strings in a programming language's syntax. It is closely related to, and often overlaps with, the semantics of mathematical proofs. Semantics describes the processes a computer follows when executing a program in that specific language. This can be shown by describing the relationship between the input and output of a program, or by explaining how the program will be executed on a certain platform, hence creating a model of computation.
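A small illustration of the denotational style: each expression of a toy arithmetic language is mapped to the mathematical value it denotes. The language and the function names below are illustrative assumptions, not a standard formalism.

```haskell
-- Denotational semantics for a toy language: the meaning of an expression
-- is the Integer it denotes.
data Expr
  = Lit Integer
  | Add Expr Expr
  | Mul Expr Expr

denote :: Expr -> Integer
denote (Lit n)   = n
denote (Add a b) = denote a + denote b
denote (Mul a b) = denote a * denote b

-- >>> denote (Add (Lit 2) (Mul (Lit 3) (Lit 4)))
-- 14
```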
First-order logic—also known as predicate logic, quantificational logic, and first-order predicate calculus—is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects, and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man", one can have expressions in the form "there exists x such that x is Socrates and x is a man", where "there exists" is a quantifier, while x is a variable.
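To make the syntax concrete, first-order formulas can be represented as a small Haskell datatype with explicit quantifiers; the Socrates sentence above then becomes a value of that type. The constructor and predicate names here are illustrative assumptions.

```haskell
-- First-order syntax: terms built from variables and function symbols,
-- formulas built from predicates, connectives, and quantifiers.
data FoTerm = Var String | Fun String [FoTerm]

data Formula
  = Pred String [FoTerm]
  | Not Formula
  | And Formula Formula
  | Implies Formula Formula
  | Exists String Formula        -- "there exists x such that ..."
  | Forall String Formula        -- "for all x ..."

-- "There exists x such that x is Socrates and x is a man."
socrates :: Formula
socrates =
  Exists "x" (And (Pred "IsSocrates" [Var "x"]) (Pred "Man" [Var "x"]))
```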
Type inference in the presence of first-class or "impredicative" second-order polymorphism à la System F has been an active research area for several decades, with original works dating back to the late 1980s. Yet, until now many basic problems remain ...
Association for Computing Machinery, 2024
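A hedged Haskell illustration of why such inference is hard (this example is mine, not taken from the cited work): the function below needs a genuinely polymorphic argument, which GHC accepts only with an explicit higher-rank annotation and the RankNTypes extension; plain Hindley-Milner inference cannot reconstruct this type on its own.

```haskell
{-# LANGUAGE RankNTypes #-}

-- The argument f must work at every type, because it is applied to both
-- an Int and a Bool.  The forall in the signature is a System F style
-- first-class polymorphic type.
applyToBoth :: (forall a. a -> a) -> (Int, Bool)
applyToBoth f = (f 0, f True)

-- >>> applyToBoth id
-- (0,True)
```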
Data races have long been a notorious problem in concurrent programming. They are subtle to detect and lead to non-deterministic behaviours. There has been a lot of interest in type systems that statically guarantee data race freedom. Significant progress ...
2024
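As a loose illustration of the general idea (not the system studied in the work above), GHC Haskell's types already enforce one such discipline: a TVar can only be read or written inside the STM monad, so every access to the shared counter below has to go through atomically, and unsynchronised access is rejected by the type checker.

```haskell
import Control.Concurrent.STM

main :: IO ()
main = do
  -- A shared, transactional counter.
  counter <- newTVarIO (0 :: Int)
  -- Updates must be wrapped in an atomic transaction.
  atomically (modifyTVar' counter (+ 1))
  n <- readTVarIO counter
  print n
```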
Motivated by the transfer of proofs between proof systems, and in particular from first-order automated theorem provers (ATPs) to interactive theorem provers (ITPs), we specify an extension of the TPTP derivation text format to describe proofs in first-ord ...