Info-gap decision theory seeks to optimize robustness to failure under severe uncertainty, in particular applying sensitivity analysis of the stability radius type to perturbations in the value of a given estimate of the parameter of interest. It has some connections with Wald's maximin model; some authors distinguish them, others consider them instances of the same principle.
It was developed by Yakov Ben-Haim, has found many applications, and has been described as a theory for decision-making under "severe uncertainty". It has also been criticized as unsuited for this purpose, and alternatives have been proposed, including classical approaches such as robust optimization.
Info-gap is a decision theory: it assists in making decisions under uncertainty. It does this by using a sequence of models, each built on the last. One begins with a model of the situation, in which some parameter or parameters are unknown. One then takes an estimate of the parameter and analyzes how sensitive the outcomes under the model are to error in this estimate.
Uncertainty model: Starting from the estimate, an uncertainty model measures how far other values of the parameter may stray from it: as the horizon of uncertainty increases, the set of possible values grows.
Robustness/opportuneness model: Given an uncertainty model, for each decision one asks: how much uncertainty can be tolerated while still being confident of an acceptable outcome? (robustness) And, for a desired windfall, how much uncertainty must be present before that outcome becomes plausible? (opportuneness)
Decision-making model: One optimizes robustness on the basis of the previous model. Given a required outcome, which decision can withstand the most uncertainty and still deliver it? And given a hoped-for windfall, which decision requires the least uncertainty to make it attainable?
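A common formalization is the following sketch, in which q is a decision, u the uncertain parameter with estimate ũ, R(q, u) a reward function, r_c a critical (must-achieve) reward and r_w a windfall reward; the fractional-error form of the uncertainty set is only one illustrative choice:

```latex
% Nested uncertainty sets around the estimate \tilde{u}, indexed by the horizon of uncertainty \alpha
\mathcal{U}(\alpha,\tilde{u}) = \{\, u : |u-\tilde{u}| \le \alpha\,|\tilde{u}| \,\}, \qquad \alpha \ge 0

% Robustness: the greatest horizon of uncertainty at which decision q still guarantees the critical reward
\hat{\alpha}(q,r_c) = \max\Big\{ \alpha \ge 0 : \min_{u\in\mathcal{U}(\alpha,\tilde{u})} R(q,u) \ge r_c \Big\}

% Opportuneness: the least horizon of uncertainty at which the windfall reward becomes possible
\hat{\beta}(q,r_w) = \min\Big\{ \alpha \ge 0 : \max_{u\in\mathcal{U}(\alpha,\tilde{u})} R(q,u) \ge r_w \Big\}
```

Under this formalization, the robust-satisficing decision maximizes the robustness α̂, while the opportune-windfalling decision minimizes the opportuneness β̂.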
Info-gap theory models uncertainty as nested subsets around a point estimate of a parameter: at zero uncertainty the set contains only the estimate, and as the horizon of uncertainty grows, the set of possible values expands, in general without bound.
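As a minimal numerical sketch (the function names, the toy reward function and all numbers here are hypothetical, not taken from the sources above), the robustness of a decision can be computed by sweeping the horizon of uncertainty and checking whether the worst case inside the uncertainty set still meets the critical reward:

```python
import numpy as np

def robustness(reward, u_est, r_crit, alphas, n_grid=201):
    """Largest horizon of uncertainty alpha such that the worst-case reward over
    the interval [u_est*(1-alpha), u_est*(1+alpha)] still meets r_crit
    (toy fractional-error uncertainty model)."""
    best = 0.0
    for alpha in alphas:
        lo, hi = u_est * (1 - alpha), u_est * (1 + alpha)
        u_values = np.linspace(lo, hi, n_grid)
        worst = min(reward(u) for u in u_values)  # worst case inside the set
        if worst >= r_crit:
            best = alpha   # this horizon of uncertainty is still safe
        else:
            break          # sets are nested, so larger horizons also fail
    return best

# Toy example: reward falls off linearly in the uncertain parameter u.
reward = lambda u: 10.0 - 2.0 * u
alphas = np.linspace(0.0, 1.0, 101)
print(robustness(reward, u_est=2.0, r_crit=4.0, alphas=alphas))  # ~0.5
```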
In decision theory and game theory, Wald's maximin model is a non-probabilistic decision-making model according to which decisions are ranked on the basis of their worst-case outcomes – the optimal decision is one with the least bad worst outcome. It is one of the most important models in robust decision making in general and robust optimization in particular. It is also known by a variety of other titles, such as Wald's maximin rule, Wald's maximin principle, Wald's maximin paradigm, and Wald's maximin criterion.
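As a small illustration (the payoff numbers are hypothetical), Wald's rule scores each decision by its worst-case payoff across the possible states of nature and selects the decision whose worst case is best:

```python
# Rows: decisions; columns: payoffs in each possible state of nature (illustrative numbers).
payoffs = {
    "a1": [12, 8, 7],
    "a2": [15, 3, 9],
    "a3": [10, 10, 6],
}

# Wald's maximin: rank decisions by their worst-case payoff, then pick the best worst case.
worst_case = {d: min(row) for d, row in payoffs.items()}
best = max(worst_case, key=worst_case.get)
print(worst_case)  # {'a1': 7, 'a2': 3, 'a3': 6}
print(best)        # a1 -- the least bad worst outcome
```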
In decision theory, when a decision is made under uncertainty and information about the best course of action arrives only afterwards, the human emotional response of regret is often experienced; it can be measured as the difference in value between the decision made and the optimal decision. The theory of regret aversion or anticipated regret proposes that when facing a decision, individuals may anticipate regret and thus incorporate into their choice their desire to eliminate or reduce this possibility.
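In symbols (a standard formulation; V(d, s), the value of decision d in state s, is notation assumed here rather than taken from the text):

```latex
\operatorname{regret}(d,s) = \max_{d'} V(d',s) - V(d,s),
\qquad
d^{*} = \arg\min_{d}\, \max_{s}\, \operatorname{regret}(d,s)
\quad \text{(minimax regret)}
```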
Minmax (sometimes Minimax, MM or saddle point) is a decision rule used in artificial intelligence, decision theory, game theory, statistics, and philosophy for minimizing the possible loss for a worst case (maximum loss) scenario. When dealing with gains, it is referred to as "maximin" – to maximize the minimum gain. Originally formulated for several-player zero-sum game theory, covering both the cases where players take alternate moves and those where they make simultaneous moves, it has also been extended to more complex games and to general decision-making in the presence of uncertainty.
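For a two-player zero-sum game in which the first player chooses action a, the second chooses b, and L(a, b) is the first player's loss (notation assumed for illustration), the rule reads:

```latex
a^{*} = \arg\min_{a}\, \max_{b}\, L(a,b)
\qquad\text{or, in terms of gain } U = -L,\qquad
a^{*} = \arg\max_{a}\, \min_{b}\, U(a,b)
```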
A course on statistical machine learning for supervised and unsupervised learning
This course is an introduction to linear and discrete optimization. Warning: This is a mathematics course! While much of the course will be algorithmic in nature, you will still need to be able to p
We study the problem of estimating an unknown function from noisy data using shallow ReLU neural networks. The estimators we study minimize the sum of squared data-fitting errors plus a regularization term proportional to the squared Euclidean norm of the ...
The presence of competing events, such as death, makes it challenging to define causal effects on recurrent outcomes. In this thesis, I formalize causal inference for recurrent events, with and without competing events. I define several causal estimands an ...
It is natural for humans to judge the outcome of a decision under uncertainty as a percentage of an ex-post optimal performance. We propose a robust decision-making framework based on a relative performance index. It is shown that if the decision maker's p ...