
Lecture: Assumption-lean Inference: Generalised Linear Model Parameters

Description

This lecture examines the challenges that model misspecification poses for statistical inference and proposes a shift from model-based to estimand-based approaches. The discussion covers the definition of estimands, main effect estimands, interaction estimands, and survival analysis estimands, and stresses choosing estimands that are well-defined and generic, so that inference remains valid even when a working model is wrong. Various criteria for selecting estimands are explored, along with the limitations of plug-in estimators and the benefits of data-adaptive inference. The presentation concludes by highlighting the advantages of assumption-lean inference and its implications for practical applications.
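As a minimal numerical sketch of the estimand-based idea (invented for illustration, not taken from the lecture): the quantity of interest is defined directly as a contrast of population means, so its plug-in estimator is well-defined regardless of how badly any working model fits the data. All names and distributions below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimand defined without reference to any model:
# tau = E[Y | A = 1] - E[Y | A = 0] exists for any data distribution.
n = 10_000
a = rng.integers(0, 2, size=n)                   # binary group indicator
y = np.exp(rng.normal(loc=0.5 * a, scale=1.0))   # skewed, non-Gaussian outcome

# Plug-in estimator: replace population means with sample means.
# A Gaussian linear model would be misspecified here, but the estimand
# (and this estimator of it) does not depend on that model being true.
tau_hat = y[a == 1].mean() - y[a == 0].mean()
print(tau_hat)
```

The true contrast here is e - e^0.5, roughly 1.07; the plug-in estimate lands near it without any correctly specified model.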


Related concepts


Estimator

In statistics, an estimator is a rule for calculating an estimate of a given quantity based on observed data: thus the rule (the estimator), the quantity of interest (the estimand) and its result (the estimate) are distinguished. For example, the sample mean is a commonly used estimator of the population mean. There are point and interval estimators. The point estimators yield single-valued results. This is in contrast to an interval estimator, where the result would be a range of plausible values.
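The point/interval distinction above can be sketched with a hypothetical sample (parameter values invented for illustration): the sample mean is a point estimator of the population mean, while a normal-approximation confidence interval is an interval estimator returning a range of plausible values.

```python
import numpy as np

rng = np.random.default_rng(42)
sample = rng.normal(loc=10.0, scale=2.0, size=200)  # hypothetical data

# Point estimator: the sample mean yields a single-valued estimate
# of the population mean (the estimand).
point_estimate = sample.mean()

# Interval estimator: a 95% normal-approximation confidence interval
# returns a range of plausible values instead of one number.
se = sample.std(ddof=1) / np.sqrt(len(sample))
interval = (point_estimate - 1.96 * se, point_estimate + 1.96 * se)
print(point_estimate, interval)
```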

Bias of an estimator

In statistics, the bias of an estimator (or bias function) is the difference between this estimator's expected value and the true value of the parameter being estimated. An estimator or decision rule with zero bias is called unbiased. In statistics, "bias" is a property of an estimator. Bias is a distinct concept from consistency: consistent estimators converge in probability to the true value of the parameter, but may be biased or unbiased; see bias versus consistency for more.
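A small Monte Carlo sketch of this definition (all values invented for illustration): the sample variance with divisor n has expected value (n - 1)/n times the true variance, so it is biased downward, while the divisor n - 1 removes the bias.

```python
import numpy as np

rng = np.random.default_rng(1)
true_var = 4.0   # variance of N(0, 2^2), chosen for illustration
n = 5

# Approximate each estimator's expected value by averaging over many samples.
reps = 200_000
samples = rng.normal(0.0, 2.0, size=(reps, n))

biased = samples.var(axis=1, ddof=0).mean()     # divisor n
unbiased = samples.var(axis=1, ddof=1).mean()   # divisor n - 1

# Bias = E[estimator] - true value: the ddof=0 version underestimates
# the variance by the factor (n - 1)/n = 0.8 here.
print(biased, unbiased)
```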

Statistical inference

Statistical inference is the process of using data analysis to infer properties of an underlying distribution of probability. Inferential statistical analysis infers properties of a population, for example by testing hypotheses and deriving estimates. It is assumed that the observed data set is sampled from a larger population. Inferential statistics can be contrasted with descriptive statistics. Descriptive statistics is solely concerned with properties of the observed data, and it does not rest on the assumption that the data come from a larger population.

Statistical assumption

Statistics, like all mathematical disciplines, does not infer valid conclusions from nothing. Inferring interesting conclusions about real statistical populations almost always requires some background assumptions. Those assumptions must be made carefully, because incorrect assumptions can generate wildly inaccurate conclusions. Here are some examples of statistical assumptions: Independence of observations from each other (this assumption is an especially common error). Independence of observational error from potential confounding effects.

Statistical theory

The theory of statistics provides a basis for the whole range of techniques, in both study design and data analysis, that are used within applications of statistics. The theory covers approaches to statistical-decision problems and to statistical inference, and the actions and deductions that satisfy the basic principles stated for these different approaches. Within a given approach, statistical theory gives ways of comparing statistical procedures; it can find a best possible procedure within a given context for given statistical problems, or can provide guidance on the choice between alternative procedures.

Related lectures

Probabilistic Models for Linear Regression (PHYS-467: Machine learning for physicists)

Covers the probabilistic model for linear regression and its applications in nuclear magnetic resonance and X-ray imaging.

The Stein Phenomenon and Superefficiency (MATH-442: Statistical theory)

Explores the Stein Phenomenon, showcasing the benefits of bias in high-dimensional statistics and the superiority of the James-Stein Estimator over the Maximum Likelihood Estimator.
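The dominance claimed above can be checked numerically. A hedged simulation sketch (dimensions and seeds invented for illustration): for X ~ N(theta, I) in dimension p >= 3, the James-Stein estimator shrinks X toward zero and achieves lower total mean squared error than the MLE, which is X itself.

```python
import numpy as np

rng = np.random.default_rng(7)
p = 20                       # dimension; JS dominates the MLE for p >= 3
theta = rng.normal(size=p)   # hypothetical true mean vector
reps = 20_000

x = theta + rng.normal(size=(reps, p))   # one observation of N(theta, I) per rep

# The MLE of theta is X itself; James-Stein shrinks X toward the origin.
norm2 = (x ** 2).sum(axis=1, keepdims=True)
js = (1.0 - (p - 2) / norm2) * x

mse_mle = ((x - theta) ** 2).sum(axis=1).mean()   # equals p in expectation
mse_js = ((js - theta) ** 2).sum(axis=1).mean()
print(mse_mle, mse_js)
```

The simulated MLE risk sits near p = 20, while the James-Stein risk is strictly smaller — a concrete instance of bias being beneficial in high dimensions.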

Implicit Generative Models

Explores implicit generative models, covering topics like method of moments, kernel choice, and robustness of estimators.

Statistical Theory: Maximum Likelihood Estimation (MATH-442: Statistical theory)

Explores the consistency and asymptotic properties of the Maximum Likelihood Estimator, including challenges in proving its consistency and constructing MLE-like estimators.
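Consistency of the MLE can be illustrated in a case where the estimator has a closed form (distribution and values chosen for illustration, not from the lecture): for Exponential(rate) data, the log-likelihood n·log(r) - r·Σx is maximised at r_hat = 1/mean(x), which converges to the true rate as n grows.

```python
import numpy as np

rng = np.random.default_rng(3)
rate = 2.0   # true rate of an Exponential(rate) distribution

# Closed-form MLE: r_hat = 1 / sample mean. Watch it approach the
# true rate as the sample size increases (consistency).
for n in (10, 1_000, 100_000):
    x = rng.exponential(scale=1 / rate, size=n)
    r_hat = 1 / x.mean()
    print(n, r_hat)
```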

Density of States and Bayesian Inference in Computational Mathematics

Explores computing density of states and Bayesian inference using importance sampling, showcasing lower variance and parallelizability of the proposed method.