Fiducial inference is one of a number of different types of statistical inference. These are rules, intended for general application, by which conclusions can be drawn from samples of data. In modern statistical practice, attempts to work with fiducial inference have fallen out of fashion in favour of frequentist inference, Bayesian inference and decision theory. However, fiducial inference is important in the history of statistics since its development led to the parallel development of concepts and tools in theoretical statistics that are widely used. Some current research in statistical methodology is either explicitly linked to fiducial inference or is closely connected to it.
The general approach of fiducial inference was proposed by Ronald Fisher. Here "fiducial" comes from the Latin for faith. Fiducial inference can be interpreted as an attempt to perform inverse probability without calling on prior probability distributions. Fiducial inference quickly attracted controversy and was never widely accepted. Indeed, counter-examples to the claims of Fisher for fiducial inference were soon published. These counter-examples cast doubt on the coherence of "fiducial inference" as a system of statistical inference or inductive logic. Other studies showed that, where the steps of fiducial inference are said to lead to "fiducial probabilities" (or "fiducial distributions"), these probabilities lack the property of additivity, and so cannot constitute a probability measure.
The concept of fiducial inference can be outlined by comparing its treatment of the problem of interval estimation with that of other modes of statistical inference.
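As a minimal sketch of the fiducial argument itself, consider the classic single-observation normal example; the notation below is introduced only for illustration and is not taken from the text above.

    % Classic illustration of the fiducial argument (assumed setup for this sketch):
    % a single observation $X \sim N(\mu, \sigma^2)$ with $\sigma$ known.
    % The quantity $Q = X - \mu$ is pivotal: its distribution does not involve $\mu$.
    \[
      Q = X - \mu \sim N(0, \sigma^2).
    \]
    % Having observed $X = x$, write $\mu = x - Q$ and carry the known distribution
    % of $Q$ over to $\mu$, yielding the fiducial distribution
    \[
      \mu \sim N(x, \sigma^2) \quad \text{(fiducially)},
    \]
    % so a fiducial interval of probability $\gamma$ is $x \pm z_{(1+\gamma)/2}\,\sigma$,
    % which in this simple case coincides numerically with the frequentist confidence interval.

No prior distribution on μ is used; the known distribution of the pivot is simply transferred to the parameter after the data are observed.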
A confidence interval, in frequentist inference, with coverage probability γ has the interpretation that among all confidence intervals computed by the same method, a proportion γ will contain the true value being estimated. This has either a repeated-sampling (or frequentist) interpretation, or is the probability that an interval calculated from yet-to-be-sampled data will cover the true value.
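A rough simulation sketch of this repeated-sampling reading, assuming a normal model with known standard deviation, a sample size of 25, and a 95% level (all illustrative choices, not drawn from the text above):

    import numpy as np

    # Simulate many samples and check how often the same interval recipe
    # contains the true mean; the model and constants are assumptions of this sketch.
    rng = np.random.default_rng(0)
    mu_true, sigma, n = 5.0, 2.0, 25
    z = 1.959964  # approximate 97.5% quantile of the standard normal

    n_reps, covered = 10_000, 0
    for _ in range(n_reps):
        sample = rng.normal(mu_true, sigma, size=n)
        half_width = z * sigma / np.sqrt(n)
        lo, hi = sample.mean() - half_width, sample.mean() + half_width
        covered += (lo <= mu_true <= hi)

    # The proportion of intervals containing mu_true should be close to 0.95.
    print(covered / n_reps)

The probability statement attaches to the interval-generating procedure over repetitions, not to any one computed interval.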
Frequentist inference is a type of statistical inference based on frequentist probability, which treats "probability" as equivalent to long-run "frequency" and draws conclusions from sample data by emphasizing the frequency or proportion of findings in the data. Frequentist inference underlies frequentist statistics, on which the well-established methodologies of statistical hypothesis testing and confidence intervals are founded. The primary formulation of frequentism rests on the presumption that sample statistics can be understood in terms of probabilistic frequencies.
In Bayesian statistics, a credible interval is an interval within which an unobserved parameter value falls with a particular probability. It is an interval in the domain of a posterior probability distribution or a predictive distribution. The generalisation to multivariate problems is the credible region. Credible intervals are analogous to confidence intervals and confidence regions in frequentist statistics, although they differ on a philosophical basis: Bayesian intervals treat their bounds as fixed and the estimated parameter as a random variable, whereas frequentist confidence intervals treat their bounds as random variables and the parameter as a fixed value.
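For concreteness, a minimal sketch of a credible interval computed from a posterior distribution, using an assumed Beta-Binomial model with a uniform prior and hypothetical data of 7 successes in 20 trials (none of these specifics come from the text above):

    from scipy import stats

    # With a Beta(1, 1) prior on a success probability p and 7 successes in
    # 13 + 7 = 20 assumed trials, the posterior is Beta(1 + 7, 1 + 13).
    # A 95% equal-tailed credible interval is given by the 2.5% and 97.5%
    # posterior quantiles.
    posterior = stats.beta(1 + 7, 1 + 13)
    lower, upper = posterior.ppf([0.025, 0.975])
    print(lower, upper)  # p lies in this interval with posterior probability 0.95

Here the probability statement is about the parameter itself, conditional on the observed data, rather than about the long-run behaviour of the interval-generating procedure.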
Explores nonparametric statistics, Bayesian methods and linear regression, with an emphasis on kernel density estimation and the posterior distribution.
Inference from the particular to the general based on probability models is central to the statistical method. This course gives a graduate-level account of the main ideas of statistical inference.
This course covers the statistical physics approach to computer science problems, with an emphasis on heuristic and rigorous mathematical techniques, ranging from graph theory to constraint satisfaction.