The theory of statistics provides a basis for the whole range of techniques, in both study design and data analysis, that are used within applications of statistics. The theory covers approaches to statistical decision problems and to statistical inference, and the actions and deductions that satisfy the basic principles stated for each approach. Within a given approach, statistical theory gives ways of comparing statistical procedures; it can identify the best possible procedure within a given context for a given statistical problem, or can provide guidance on the choice between alternative procedures.
Apart from philosophical considerations about how to make statistical inferences and decisions, much of statistical theory consists of mathematical statistics, and is closely linked to probability theory, to utility theory, and to optimization.
Statistical theory provides an underlying rationale and a consistent basis for the choice of methodology used in applied statistics.
Statistical models describe the sources of data and can have different types of formulation corresponding to these sources and to the problem being studied. Such problems can be of various kinds:
Sampling from a finite population
Measuring observational error and refining procedures
Studying statistical relations
Statistical models, once specified, can be tested to see whether they provide useful inferences for new data sets.
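The idea of testing a specified model on new data can be made concrete with a small simulation. The Python sketch below is purely illustrative (the linear model, the simulated data source, and all variable names are assumptions of this example, not anything stated above): it fits a model on one data set, then checks whether its predictions remain useful on fresh data from the same source.

```python
# Minimal sketch: fit a simple linear model on one data set, then test it
# on new data from the same source. Everything here is simulated.
import numpy as np

rng = np.random.default_rng(0)

# Simulated "source of data": y = 2x + 1 + noise
x_old = rng.uniform(0, 10, 200)
y_old = 2 * x_old + 1 + rng.normal(0, 1, 200)

# Specify and fit the model on the original data (least squares).
slope, intercept = np.polyfit(x_old, y_old, deg=1)

# New data from the same source, never used in fitting.
x_new = rng.uniform(0, 10, 100)
y_new = 2 * x_new + 1 + rng.normal(0, 1, 100)

# Test the fitted model: compare its prediction error on the new data
# with the error of a trivial baseline (predicting the old mean).
pred = slope * x_new + intercept
mse_model = np.mean((y_new - pred) ** 2)
mse_baseline = np.mean((y_new - y_old.mean()) ** 2)
print(f"model MSE on new data: {mse_model:.2f}  baseline MSE: {mse_baseline:.2f}")
```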
Statistical theory provides a guide to comparing methods of data collection, where the problem is to generate informative data using optimization and randomization while measuring and controlling for observational error. Optimization of data collection reduces its cost while still meeting statistical goals, and randomization underpins reliable inference. Statistical theory provides a basis for good data collection and the structuring of investigations in topics such as:
Design of experiments to estimate treatment effects, to test hypotheses, and to optimize responses; a toy randomized-experiment sketch follows below.
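A minimal Python sketch of the randomization idea. Everything here is simulated for illustration (the sample size, the true effect of 2.0, and the variable names are assumptions of this toy example): units are randomly assigned to treatment, so the simple difference in group means estimates the treatment effect.

```python
# Illustrative sketch: randomization in a designed experiment, and the
# difference-in-means estimate of a treatment effect. Data are simulated;
# the true effect of 2.0 is an assumption of this toy example.
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Randomize units to treatment or control with equal probability.
treated = rng.random(n) < 0.5

# Simulated responses: baseline noise plus a true treatment effect of 2.0.
response = rng.normal(10, 3, n) + 2.0 * treated

# Because assignment was randomized, the difference in group means is an
# unbiased estimate of the average treatment effect.
ate_hat = response[treated].mean() - response[~treated].mean()
print(f"estimated treatment effect: {ate_hat:.2f} (truth: 2.0)")
```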
Statistics, like all mathematical disciplines, does not infer valid conclusions from nothing. Inferring interesting conclusions about real statistical populations almost always requires some background assumptions. Those assumptions must be made carefully, because incorrect assumptions can generate wildly inaccurate conclusions. Examples of statistical assumptions include independence of observations from each other (wrongly assuming independence is an especially common error) and independence of observational error from potential confounding effects.
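A small simulation can show why the independence assumption matters. The Python sketch below (the AR(1) model, its parameters, and all names are illustrative assumptions) generates positively autocorrelated data; the usual standard error of the mean, derived under independence, then badly understates the true variability.

```python
# Sketch of why wrongly assuming independence matters: for positively
# autocorrelated (AR(1)) data, the usual standard error of the mean,
# which assumes independent observations, comes out far too small.
import numpy as np

rng = np.random.default_rng(2)
n, phi, reps = 200, 0.8, 2000

means = []
for _ in range(reps):
    e = rng.normal(0, 1, n)
    x = np.empty(n)
    x[0] = e[0]
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]  # AR(1) dependence between observations
    means.append(x.mean())

# Naive SE from one series (assumes independence) vs. the actual spread
# of the sample mean across many simulated series.
naive_se = x.std(ddof=1) / np.sqrt(n)
true_se = np.std(means, ddof=1)
print(f"naive SE (independence assumed): {naive_se:.3f}")
print(f"actual SE of the sample mean:    {true_se:.3f}")
```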
Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component. The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements.
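As a minimal illustration of this setting, the Python sketch below computes the closed-form maximum-likelihood estimates of a normal distribution's parameters from simulated measurements; the true parameter values and sample size are arbitrary assumptions of the example.

```python
# Minimal estimation example: closed-form maximum-likelihood estimates
# of the mean and variance of a normal distribution from noisy data.
import numpy as np

rng = np.random.default_rng(3)
true_mu, true_sigma = 5.0, 2.0

# Measured data with a random component, as in the setting above.
data = rng.normal(true_mu, true_sigma, 1000)

# MLE for a normal model: sample mean, and variance with the 1/n divisor.
mu_hat = data.mean()
var_hat = np.mean((data - mu_hat) ** 2)
print(f"mu_hat    = {mu_hat:.3f}  (true {true_mu})")
print(f"sigma_hat = {np.sqrt(var_hat):.3f}  (true {true_sigma})")
```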
In statistics, quality assurance, and survey methodology, sampling is the selection of a subset or a statistical sample (termed sample for short) of individuals from within a statistical population to estimate characteristics of the whole population. Statisticians attempt to collect samples that are representative of the population. Sampling has lower costs and faster data collection compared to recording data from the entire population, and thus, it can provide insights in cases where it is infeasible to measure an entire population.
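A toy Python sketch of simple random sampling (the population, its distribution, and the sample size are illustrative assumptions): a modest sample yields a usable estimate of the mean of a much larger population, at a fraction of the measurement cost.

```python
# Sketch of simple random sampling: estimate a population mean from a
# small sample instead of measuring every unit.
import numpy as np

rng = np.random.default_rng(4)

# A finite population of 100,000 units (some measured characteristic).
population = rng.gamma(shape=2.0, scale=3.0, size=100_000)

# Draw a simple random sample without replacement.
sample = rng.choice(population, size=500, replace=False)

estimate = sample.mean()
print(f"sample estimate: {estimate:.3f}  population mean: {population.mean():.3f}")
```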
Learn the basics of plasma, one of the fundamental states of matter, and the different types of models used to describe it, including fluid and kinetic.
This course covers statistical methods that are widely used in medicine and biology. A key topic is the analysis of longitudinal data: that is, methods to evaluate exposures, effects and outcomes that ...
This course covers formal frameworks for causal inference. We focus on experimental designs, definitions of causal models, interpretation of causal parameters and estimation of causal effects.
Explores the consistency and asymptotic properties of the Maximum Likelihood Estimator, including challenges in proving its consistency and constructing MLE-like estimators.
Explores the Stein Phenomenon, showcasing the benefits of bias in high-dimensional statistics and the superiority of the James-Stein Estimator over the Maximum Likelihood Estimator.
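The Stein phenomenon mentioned above can be checked numerically. The following Python sketch (the dimension, replication count, and simulated mean vector are assumptions of this toy example) compares the Monte Carlo risk of the MLE with that of the positive-part James-Stein estimator for a multivariate normal mean with identity covariance.

```python
# Monte Carlo sketch of the Stein phenomenon: in dimension d >= 3, the
# James-Stein estimator has lower total squared error than the MLE
# (the raw observation) for a normal mean with identity covariance.
import numpy as np

rng = np.random.default_rng(5)
d, reps = 10, 5000
theta = rng.normal(0, 1, d)  # arbitrary true mean vector

mse_mle, mse_js = 0.0, 0.0
for _ in range(reps):
    x = theta + rng.normal(0, 1, d)                 # x ~ N(theta, I_d)
    shrink = max(0.0, 1 - (d - 2) / np.dot(x, x))   # positive-part James-Stein
    js = shrink * x
    mse_mle += np.sum((x - theta) ** 2)
    mse_js += np.sum((js - theta) ** 2)

print(f"MLE risk:         {mse_mle / reps:.3f}")  # about d = 10
print(f"James-Stein risk: {mse_js / reps:.3f}")   # strictly smaller
```

For d ≥ 3 the James-Stein risk comes out strictly below d, the risk of the MLE, which is the sense in which a deliberately biased estimator dominates in high dimensions.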
Thin-laminate composites with thicknesses below 200 µm hold significant promise for future, larger, and lighter deployable structures. This paper presents a study of the time-dependent failure behavior of thin carbon-fiber laminates under bending, focusi ...
We summarize what we consider to be the two main limitations of the "Estimands for Recurrent Event Endpoints in the Presence of a Terminal Event" (Schmidli et al. 2022). First, the authors did not give detailed guidance on how to choose an appropriate esti ...
In inverse problems, the task is to reconstruct an unknown signal from its possibly noise-corrupted measurements. Penalized-likelihood-based estimation and Bayesian estimation are two powerful statistical paradigms for the resolution of such problems. They ...