Maximum likelihood estimation
In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of an assumed probability distribution, given some observed data. This is achieved by maximizing a likelihood function so that, under the assumed statistical model, the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference.
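To make the idea concrete, here is a minimal sketch (not part of the original text; it assumes NumPy and SciPy are available): the mean and standard deviation of a normal model are estimated by numerically maximizing the log-likelihood of a simulated sample.

```python
# Sketch: maximum likelihood estimation for a normal model,
# by numerically maximizing the log-likelihood of simulated data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=500)  # the "observed" sample

def neg_log_likelihood(params, x):
    mu, log_sigma = params            # optimize log(sigma) so that sigma stays positive
    sigma = np.exp(log_sigma)
    # Normal log-density summed over the sample, negated so we can minimize.
    return -np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                   - (x - mu)**2 / (2 * sigma**2))

result = minimize(neg_log_likelihood, x0=[0.0, 0.0], args=(data,))
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print(mu_hat, sigma_hat)  # close to the sample mean and the ML standard deviation
```

For the normal model the maximizer is also available in closed form (the sample mean and the maximum-likelihood variance), so the numerical output can be checked directly against those formulas.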
Extremum estimator
In statistics and econometrics, extremum estimators are a wide class of estimators for parametric models that are calculated through maximization (or minimization) of a certain objective function, which depends on the data. A general theory of extremum estimators has been developed in the econometrics literature. An estimator $\hat{\theta}$ is called an extremum estimator if there is an objective function $\widehat{Q}_n$ such that

$$\hat{\theta} = \underset{\theta \in \Theta}{\operatorname{arg\,max}}\ \widehat{Q}_n(\theta),$$

where $\Theta$ is the parameter space. Sometimes a slightly weaker definition is given:

$$\widehat{Q}_n(\hat{\theta}) \ge \max_{\theta \in \Theta} \widehat{Q}_n(\theta) - o_p(1),$$

where $o_p(1)$ denotes a quantity converging in probability to zero.
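As a sketch of the definition (not from the original text; it assumes NumPy and SciPy), ordinary least squares can be viewed as an extremum estimator whose objective $\widehat{Q}_n$ is the negative mean squared residual:

```python
# Sketch: ordinary least squares as an extremum estimator.
# The objective Q_n(theta) is the negative mean squared residual,
# and theta_hat is its maximizer over the parameter space.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=200)
y = 3.0 * x + rng.normal(scale=2.0, size=200)   # true slope 3.0, no intercept

def Q_n(theta):
    return -np.mean((y - theta * x) ** 2)       # objective to be maximized

# Maximizing Q_n is the same as minimizing -Q_n.
theta_hat = minimize_scalar(lambda t: -Q_n(t)).x
print(theta_hat)  # close to 3.0
```

Maximum likelihood, least squares, and the generalized method of moments all fit this template; only the choice of objective function changes.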
Validity (logic)
In logic, specifically in deductive reasoning, an argument is valid if and only if it takes a form that makes it impossible for the premises to be true and the conclusion nevertheless to be false. It is not required for a valid argument to have premises that are actually true, but to have premises that, if they were true, would guarantee the truth of the argument's conclusion. Valid arguments must be clearly expressed by means of sentences called well-formed formulas (also called wffs or simply formulas).
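The definition suggests a brute-force check (a sketch added here, not from the original text): a propositional argument form is valid exactly when no assignment of truth values makes every premise true and the conclusion false. The snippet below tests modus ponens this way.

```python
# Sketch: brute-force truth-table check of validity for a propositional
# argument form, using modus ponens as the example.
from itertools import product

def implies(p, q):
    return (not p) or q

def is_valid(premises, conclusion, n_vars):
    """premises and conclusion are functions of a tuple of booleans."""
    for values in product([True, False], repeat=n_vars):
        if all(prem(values) for prem in premises) and not conclusion(values):
            return False          # counterexample found: the form is invalid
    return True

# Modus ponens: from "P -> Q" and "P", infer "Q".
premises = [lambda v: implies(v[0], v[1]), lambda v: v[0]]
conclusion = lambda v: v[1]
print(is_valid(premises, conclusion, n_vars=2))  # True: the form is valid
```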
Pivotal quantity
In statistics, a pivotal quantity or pivot is a function of observations and unobservable parameters such that the function's probability distribution does not depend on the unknown parameters (including nuisance parameters). A pivotal quantity need not be a statistic—the function and its value can depend on the parameters of the model, but its distribution must not. If it is a statistic, then it is known as an ancillary statistic. More formally, let $X = (X_1, X_2, \ldots, X_n)$ be a random sample from a distribution that depends on a parameter (or vector of parameters) $\theta$.
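As an illustration (not from the original text), the studentized sample mean is a classic pivotal quantity for a normal sample: its distribution is Student's t with $n-1$ degrees of freedom whatever the true mean and variance. The simulation below, which assumes NumPy, checks this empirically.

```python
# Sketch: T = sqrt(n) * (Xbar - mu) / S is a pivotal quantity for a normal
# sample -- its distribution does not depend on mu or sigma.
import numpy as np

def simulate_pivot(mu, sigma, n=10, reps=100_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(mu, sigma, size=(reps, n))
    xbar = x.mean(axis=1)
    s = x.std(axis=1, ddof=1)
    return np.sqrt(n) * (xbar - mu) / s

# Very different parameter values, essentially the same pivot distribution.
t1 = simulate_pivot(mu=0.0, sigma=1.0)
t2 = simulate_pivot(mu=50.0, sigma=9.0)
print(np.quantile(t1, [0.05, 0.5, 0.95]))
print(np.quantile(t2, [0.05, 0.5, 0.95]))  # nearly identical quantiles
```

This invariance is what makes pivots useful for building confidence intervals: the same tabulated distribution serves regardless of the unknown parameter values.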
Material conditional
The material conditional (also known as material implication) is an operation commonly used in logic. When the conditional symbol $\rightarrow$ is interpreted as material implication, a formula $P \rightarrow Q$ is true unless $P$ is true and $Q$ is false. Material implication can also be characterized inferentially by modus ponens, modus tollens, conditional proof, and classical reductio ad absurdum. Material implication is used in all the basic systems of classical logic as well as some nonclassical logics.
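A short script (added here as an illustration) prints the full truth table, making visible that the only falsifying row is the one where P is true and Q is false:

```python
# Sketch: the truth table of the material conditional P -> Q.
from itertools import product

def material_conditional(p, q):
    return (not p) or q

for p, q in product([True, False], repeat=2):
    print(f"P={p!s:5}  Q={q!s:5}  P->Q={material_conditional(p, q)}")
```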
Gentzen's consistency proof
Gentzen's consistency proof is a result of proof theory in mathematical logic, published by Gerhard Gentzen in 1936. It shows that the Peano axioms of first-order arithmetic do not contain a contradiction (i.e. are "consistent"), as long as a certain other system used in the proof does not contain any contradictions either. This other system, today called "primitive recursive arithmetic with the additional principle of quantifier-free transfinite induction up to the ordinal ε0", is neither weaker nor stronger than the system of Peano axioms.
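For context, the ordinal ε0 mentioned above has a standard characterization (added here; this definition is not part of the original text):

```latex
% epsilon_0 as the limit of the tower of omega-exponentials,
% equivalently the least fixed point of the map x -> omega^x.
\varepsilon_0 = \sup\{\omega,\ \omega^{\omega},\ \omega^{\omega^{\omega}},\ \dots\},
\qquad \text{the least ordinal } \varepsilon \text{ such that } \omega^{\varepsilon} = \varepsilon .
```

Quantifier-free transfinite induction up to this ordinal is the principle that the proof adds on top of primitive recursive arithmetic.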
Equiconsistency
In mathematical logic, two theories are equiconsistent if the consistency of one theory implies the consistency of the other theory, and vice versa. In this case, they are, roughly speaking, "as consistent as each other". In general, it is not possible to prove the absolute consistency of a theory T. Instead we usually take a theory S, believed to be consistent, and try to prove the weaker statement that if S is consistent then T must also be consistent—if we can do this we say that T is consistent relative to S.
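One standard way to phrase this in symbols (a sketch added here, with Con(·) denoting a consistency statement; the notation is not from the original text):

```latex
% Relative consistency of T to S, and equiconsistency of S and T.
% The implications are typically required to be provable in a weak
% base theory, for example primitive recursive arithmetic.
\mathrm{Con}(S) \rightarrow \mathrm{Con}(T)
\qquad\text{and}\qquad
\mathrm{Con}(S) \leftrightarrow \mathrm{Con}(T)
```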
Inference
Inferences are steps in reasoning, moving from premises to logical consequences; etymologically, the word infer means to "carry forward". Inference is traditionally divided into deduction and induction, a distinction that in Europe dates at least to Aristotle (300s BCE). Deduction is inference deriving logical conclusions from premises known or assumed to be true, with the laws of valid inference being studied in logic. Induction is inference from particular evidence to a universal conclusion.
Classical logic
Classical logic (or standard logic or Frege-Russell logic) is the most intensively studied and most widely used class of deductive logic. Classical logic has had much influence on analytic philosophy. Each logical system in this class shares characteristic properties:
- the law of excluded middle and double negation elimination;
- the law of noncontradiction and the principle of explosion;
- monotonicity of entailment and idempotency of entailment;
- commutativity of conjunction;
- De Morgan duality: every logical operator is dual to another.
While not entailed by the preceding conditions, contemporary discussions of classical logic normally only include propositional and first-order logics.
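As a small illustration (not from the original text), De Morgan duality between conjunction and disjunction can be checked exhaustively over the two classical truth values:

```python
# Sketch: exhaustive check of De Morgan's laws under the classical
# two-valued semantics.
from itertools import product

assert all((not (p and q)) == ((not p) or (not q))
           for p, q in product([True, False], repeat=2))
assert all((not (p or q)) == ((not p) and (not q))
           for p, q in product([True, False], repeat=2))
print("De Morgan's laws hold for all classical truth assignments")
```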
Statistical hypothesis testing
A statistical hypothesis test is a method of statistical inference used to decide whether the data at hand sufficiently support a particular hypothesis. Hypothesis testing allows us to make probabilistic statements about population parameters. While hypothesis testing was popularized early in the 20th century, early forms were used in the 1700s. The first use is credited to John Arbuthnot (1710), followed by Pierre-Simon Laplace (1770s), in analyzing the human sex ratio at birth.
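In the spirit of Arbuthnot's analysis, a sign test can be sketched as follows (an illustration added here, with made-up round numbers rather than the historical data; only the Python standard library is assumed): under the null hypothesis that male and female births are equally likely, the count of male-majority years is Binomial(n, 1/2), and the one-sided p-value is the upper tail of that distribution.

```python
# Sketch: a sign test for the sex ratio at birth.
# Under the null hypothesis of equal chances, the number of years with a
# male-birth majority follows a Binomial(n_years, 0.5) distribution.
from math import comb

n_years = 80            # illustrative number of years observed
male_majority = 80      # illustrative count of male-majority years

# One-sided p-value: P(X >= male_majority) for X ~ Binomial(n_years, 0.5).
p_value = sum(comb(n_years, k)
              for k in range(male_majority, n_years + 1)) / 2**n_years
print(p_value)          # astronomically small: reject the null of equal chances
```

A p-value this small says the observed pattern would be essentially impossible under the null hypothesis, which is the basic logic of a hypothesis test.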