Frequentist probability

Summary

Frequentist probability or frequentism is an interpretation of probability; it defines an event's probability as the limit of its relative frequency in many trials (the long-run probability). Probabilities can be found (in principle) by a repeatable objective process (and are thus ideally devoid of opinion). The continued use of frequentist methods in scientific inference, however, has been called into question.
The development of the frequentist account was motivated by the problems and paradoxes of the previously dominant viewpoint, the classical interpretation. In the classical interpretation, probability was defined by the principle of indifference, based on the natural symmetry of a problem; for example, the probabilities of dice games arise from the natural symmetric six-sidedness of the cube. This classical interpretation stumbled on any statistical problem with no natural symmetry to reason from.
Definition
In the frequentist interpretation, probabilities are defined only for well-defined repeatable experiments: the probability of an event is the limiting value of the event's relative frequency as the number of repetitions of the experiment grows without bound.
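The long-run-frequency definition above can be illustrated by simulation. The sketch below (the function name and the die example are illustrative, not from the source) estimates the probability of rolling a six by counting how often it occurs over many trials; the estimate drifts toward 1/6 as the number of trials grows.

```python
import random

def relative_frequency(event, trial, n):
    """Estimate P(event) as the fraction of n independent trials where event holds."""
    hits = sum(1 for _ in range(n) if event(trial()))
    return hits / n

random.seed(0)
roll = lambda: random.randint(1, 6)   # one trial: roll a fair six-sided die
is_six = lambda x: x == 6             # the event of interest

# The estimate approaches 1/6 ≈ 0.1667 as n grows.
for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(is_six, roll, n))
```

Note that this only ever yields a finite-sample estimate; the frequentist probability is the idealized limit of this procedure.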



Related concepts (13)

Bayesian probability

Bayesian probability (ˈbeɪziən or ˈbeɪʒən) is an interpretation of the concept of probability in which, instead of the frequency or propensity of some phenomenon, probability is interpreted as reasonable expectation representing a state of knowledge or as quantification of a personal belief.
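The contrast with the frequentist estimate can be sketched with a conjugate Beta-Binomial update, a standard textbook example (the helper below is illustrative, not from the source): the Bayesian posterior mean blends the data with a prior, while the frequentist estimate is just the observed relative frequency.

```python
def beta_posterior(a, b, k, n):
    """Conjugate Beta-Binomial update: prior Beta(a, b), observe k successes in n trials."""
    return a + k, b + (n - k)

# Uniform prior Beta(1, 1); suppose we observe 7 heads in 10 coin flips.
a_post, b_post = beta_posterior(1, 1, 7, 10)

posterior_mean = a_post / (a_post + b_post)  # 8 / 12 ≈ 0.667, pulled toward the prior
freq_estimate = 7 / 10                       # frequentist relative frequency: 0.7
print(posterior_mean, freq_estimate)
```

With more data the two estimates converge; the philosophical difference is in what the number is taken to mean.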

Probability

The term probability has several meanings: coming historically from the Latin probabilitas, it designates...

Statistical test

In statistics, a test, or hypothesis test, is a decision procedure between two hypotheses. It is a procedure that either rejects or fails to reject a statistical hypothesis, called the null hypothesis.

Related courses (11)

MATH-442: Statistical theory

The course aims at developing certain key aspects of the theory of statistics, providing a common general framework for statistical methodology. While the main emphasis will be on the mathematical aspects of statistics, an effort will be made to balance rigor and intuition.

FIN-415: Probability and stochastic calculus

This course gives an introduction to probability theory and stochastic calculus in discrete and continuous time. We study fundamental notions and techniques necessary for applications in finance such as option pricing, hedging, optimal portfolio choice and prediction problems.

BIO-449: Understanding statistics and experimental design

This course is neither an introduction to the mathematics of statistics nor an introduction to a statistics program such as R. The aim of the course is to understand statistics from its experimental design and to avoid common pitfalls of statistical reasoning. There is space to discuss ongoing work.

Related publications (3)

Frequentist and Bayesian approaches to statistics have long been seen as incompatible, but recent work has tried to unify them (Bayarri and Berger, 2004; Efron, 2005). Empirical Bayes, approximate Bayesian analysis, and the matching-prior approach are examples of methods where prior elicitation is driven by a frequentist interpretation of the data. In this report, we review some standard frequentist and Bayesian model selection techniques and describe how objective Bayes theory can help in this decision-oriented framework, typically by enabling consideration of uncertainty in the model-building process. Objective Bayes methods are then used and compared with standard model selection methods on data from the financial sector in Switzerland, Greece, and the United States from 1999 to 2013; results show broad agreement between the methods, but conclusions are less clear-cut in the objective Bayes framework.

2015

We consider finite-horizon reach-avoid problems for discrete-time stochastic systems. Our goal is to construct upper bound functions for the reach-avoid probability by means of tractable convex optimization problems. We achieve this by restricting attention to the span of Gaussian radial basis functions and imposing structural assumptions on the transition kernel of the stochastic processes as well as the target and safe sets of the reach-avoid problem. In particular, we require the kernel to be written as a Gaussian mixture density with each mean of the distribution being affine in the current state and input, and the target and safe sets to be written as intersections of quadratic inequalities. Taking advantage of these structural assumptions, we formulate a recursion of semidefinite programs where each step provides an upper bound to the value function of the reach-avoid problem. The upper bounds provide a performance metric to which any suboptimal control policy can be compared, and can themselves be used to construct suboptimal control policies. We illustrate via numerical examples that even if the resulting bounds are conservative, the associated control policies achieve higher reach-avoid probabilities than heuristic controllers for problems of large state-input space dimensions (more than 20). The results presented in this paper far exceed the limits of current approximation methods for reach-avoid problems in the specific class of stochastic systems considered.

2015

Model specification is an integral part of any statistical inference problem. Several model selection techniques have been developed in order to determine which model is the best one among a list of possible candidates. Another way to deal with this question is so-called model averaging, and in particular the frequentist approach: an estimate of the parameters of interest is obtained by constructing a weighted average of the estimates of these quantities under each candidate model. We develop compromise frequentist strategies for the estimation of regression parameters, as well as for the probabilistic clustering problem. In the regression context, we construct compromise strategies based on the Pitman estimators associated with various underlying error distributions. The weight given to each model is equal to its profile likelihood, which gives a measure of the goodness of fit. Asymptotic properties of both Pitman estimators and profile likelihood allow us to define a minimax strategy for choosing the distributions of the compromise, involving a notion of distance between distributions. The performance of such estimators is then compared to other usual and robust procedures. In the second part of the thesis, we develop compromise strategies in the probabilistic clustering context. Although this clustering method is based on mixtures of distributions, our compromise strategies are not applied directly to the estimates of the parameters, but to the posterior probabilities of membership. Two types of compromise are presented, and the performance of the resulting classification rules is investigated.