The Bayes factor is the ratio of the marginal likelihoods of two competing statistical models; that is, the likelihoods of the two models integrated over the prior distributions of their parameters. It is used to quantify the support for one model over the other. The models in question may share a common set of parameters, as with a null hypothesis and an alternative, but this is not necessary; for instance, a non-linear model may be compared with its linear approximation.

The Bayes factor can be thought of as a Bayesian analog of the likelihood-ratio test, although it uses the (integrated) marginal likelihood rather than the maximized likelihood; the two quantities coincide only under simple hypotheses (e.g., two specific parameter values). In contrast with null hypothesis significance testing, Bayes factors also support the evaluation of evidence in favor of a null hypothesis, rather than only allowing the null to be rejected or not rejected.

Although conceptually simple, computing the Bayes factor can be challenging, depending on the complexity of the model and of the hypotheses. Since closed-form expressions for the marginal likelihood are generally unavailable, numerical approximations based on MCMC samples have been proposed. For certain special cases, simplified algebraic expressions can be derived; the Savage–Dickey density ratio, for instance, applies when a precise (equality-constrained) hypothesis is tested against an unrestricted alternative. Another approximation, obtained by applying Laplace's approximation to the integrated likelihoods, is the Bayesian information criterion (BIC); in large data sets the Bayes factor approaches the BIC as the influence of the priors wanes. In small data sets the priors generally matter, and they must not be improper, since the Bayes factor is undefined if either of the two integrals in its ratio is not finite.
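One of the rare cases where both marginal likelihoods have closed forms is a binomial experiment with a point null against a uniform prior on the success probability. The sketch below (function name and the choice of a flat Beta(1, 1) prior are illustrative, not prescribed by the text) computes the Bayes factor BF01 for H0: θ = 0.5 versus H1: θ ~ Uniform(0, 1):

```python
from math import comb

def bayes_factor_01(k: int, n: int) -> float:
    """Bayes factor BF01 for k successes in n Bernoulli trials.

    H0: theta = 0.5 (point null)
    H1: theta ~ Uniform(0, 1) (illustrative flat prior)

    Marginal likelihood under H0: C(n, k) * 0.5**n (no free parameter,
    so no integration is needed).
    Marginal likelihood under H1: the binomial likelihood integrated
    over the uniform prior, which simplifies to 1 / (n + 1).
    """
    m0 = comb(n, k) * 0.5 ** n
    m1 = 1.0 / (n + 1)
    return m0 / m1

# Balanced data support the point null; lopsided data support the alternative.
print(bayes_factor_01(5, 10))   # greater than 1: evidence for H0
print(bayes_factor_01(9, 10))   # less than 1: evidence for H1
```

Note that BF01 > 1 here counts as evidence *for* the null, illustrating the contrast with significance testing drawn above: a p-value can only fail to reject H0, whereas the Bayes factor can actively favor it.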

About this result
This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.
Related courses (20)
MATH-232: Probability and statistics
A basic course in probability and statistics
BIO-449: Understanding statistics and experimental design
This course is neither an introduction to the mathematics of statistics nor an introduction to a statistics program such as R. The aim of the course is to understand statistics from its experimental d
CS-101: Advanced information, computation, communication I
Discrete mathematics is a discipline with applications to almost all areas of study. It provides a set of indispensable tools to computer science in particular. This course reviews (familiar) topics a
Related lectures (45)
Inference Problems & Spin Glass Game
Covers inference problems related to the Spin Glass Game and the challenges of making mistakes with probability P.
Probability and Statistics
Introduces probability, statistics, distributions, inference, likelihood, and combinatorics for studying random events and network modeling.
Probability and Statistics
Covers p-quantile, normal approximation, joint distributions, and exponential families in probability and statistics.
Related publications (39)

Accelerometry as a tool for measuring the effects of transcranial magnetic stimulation

Pierre Theopistos Vassiliadis, Julie Duqué

Objective: We predicted that accelerometry would be a viable alternative to electromyography (EMG) for assessing fundamental Transcranial Magnetic Stimulation (TMS) measurements (e.g. Resting Motor Threshold (RMT), recruitment curves, latencies). New Metho ...
Elsevier2024

Time vs. Truth: Age-Distortion Tradeoffs and Strategies for Distributed Inference

Yunus Inan

In 1948, Claude Shannon laid the foundations of information theory, which grew out of a study to find the ultimate limits of source compression, and of reliable communication. Since then, information theory has proved itself not only as a quest to find the ...
EPFL2023

FANOK: Knockoffs in Linear Time

Quentin Rebjock

We describe a series of algorithms that efficiently implement Gaussian model-X knockoffs to control the false discovery rate on large-scale feature selection problems. Identifying the knockoff distribution requires solving a large-scale semidefinite progra ...
SIAM PUBLICATIONS2021
Related concepts (17)
Bayesian information criterion
In statistics, the Bayesian information criterion (BIC) or Schwarz information criterion (also SIC, SBC, SBIC) is a criterion for model selection among a finite set of models; models with lower BIC are generally preferred. It is based, in part, on the likelihood function and it is closely related to the Akaike information criterion (AIC). When fitting models, it is possible to increase the maximum likelihood by adding parameters, but doing so may result in overfitting.
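The BIC described above penalizes the maximized log-likelihood by a term that grows with both the parameter count and the sample size, which is what discourages the overfitting mentioned at the end of the paragraph. A minimal sketch (function name and the fitted values are illustrative assumptions):

```python
import math

def bic(log_likelihood: float, n_params: int, n_obs: int) -> float:
    """Schwarz's criterion: BIC = k * ln(n) - 2 * ln(L_hat)."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

# Two hypothetical fits to the same 100 observations: the second model
# adds 3 parameters but improves the log-likelihood by only 2 nats,
# so its BIC comes out higher (worse) despite the larger likelihood.
bic_simple = bic(log_likelihood=-250.0, n_params=2, n_obs=100)
bic_complex = bic(log_likelihood=-248.0, n_params=5, n_obs=100)
print(bic_simple, bic_complex)
```

The extra parameters would need to buy roughly (3/2) ln(100) ≈ 6.9 nats of log-likelihood to break even, so the simpler model is preferred here.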
Model selection
Model selection is the task of choosing the best model from among various candidates on the basis of a performance criterion. In the context of learning, this may be the selection of a statistical model from a set of candidate models, given data. In the simplest cases, a pre-existing set of data is considered. However, the task can also involve the design of experiments such that the data collected is well-suited to the problem of model selection.
P-value
In null-hypothesis significance testing, the p-value is the probability of obtaining test results at least as extreme as the result actually observed, under the assumption that the null hypothesis is correct. A very small p-value means that such an extreme observed outcome would be very unlikely under the null hypothesis. Even though reporting p-values of statistical tests is common practice in academic publications of many quantitative fields, misinterpretation and misuse of p-values is widespread and has been a major topic in mathematics and metascience.