In mathematical analysis, a null set is a Lebesgue measurable set of real numbers that has measure zero. Equivalently, it is a set that can be covered by a countable union of intervals of arbitrarily small total length.

The notion of a null set should not be confused with the empty set as defined in set theory. Although the empty set has Lebesgue measure zero, there are also non-empty sets which are null. For example, any non-empty countable set of real numbers has Lebesgue measure zero and is therefore null.

More generally, on a given measure space M = (X, \Sigma, \mu), a null set is a set S \in \Sigma such that \mu(S) = 0.

Examples

Every finite or countably infinite subset of the real numbers is a null set. For example, the set of natural numbers and the set of rational numbers are both countably infinite and are therefore null sets when considered as subsets of the real numbers. The Cantor set is an example of an uncountable null set.
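The covering characterization above can be made concrete: to cover a countable set with total length at most ε, give the n-th point an interval of length ε / 2^(n+1), so the lengths sum to less than ε. A minimal sketch (the function name and the choice of 50 terms are illustrative):

```python
def cover_lengths(epsilon, n_terms=50):
    """Interval lengths covering the first n_terms points of a countable set:
    the n-th point receives an interval of length epsilon / 2**(n + 1),
    so the total length of the cover never exceeds epsilon."""
    return [epsilon / 2 ** (n + 1) for n in range(n_terms)]

total = sum(cover_lengths(1e-3))
# total approaches epsilon * (1 - 2**-n_terms) but stays strictly below epsilon
print(total)
```

Since ε can be taken arbitrarily small, the covered set has Lebesgue measure zero.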
The aim of this course is to provide a solid foundation in the theory of distributions and Sobolev spaces, together with an introduction to the more general theory of interpolation spaces.
Participants in this course will master computational techniques frequently used in mathematical finance applications. Emphasis will be placed on implementation and practical aspects.
In mathematics, the concept of a measure is a generalization and formalization of geometrical measures (length, area, volume) and other common notions, such as magnitude, mass, and probability of events.
In measure theory, a branch of mathematics, the Lebesgue measure, named after French mathematician Henri Lebesgue, is the standard way of assigning a measure to subsets of n-dimensional Euclidean space.
In mathematics, the integral of a non-negative function of a single variable can be regarded, in the simplest case, as the area between the graph of that function and the x-axis.
In multiple testing problems where the components come from a mixture model of noise and true effects, we seek first to test for the existence of non-zero components, and then to identify the true alternatives at a fixed significance level α. Two parameters, namely the fraction of non-null components ε and the size of the effects μ, characterise the two-point mixture model under the global alternative. When the number of hypotheses m goes to infinity, we are interested in an asymptotic framework where the fraction of non-null components is vanishing, and the true effects need to be sizable to be detected. Donoho and Jin give an explicit form of the asymptotic detection boundary for the Gaussian mixture model under the classic calibration of the parameters of the mixture model. We prove the analogous results for the Cauchy mixture distribution as an example of the heavy-tailed case. This requires a different formulation of the parameters, which reflects the added difficulties. We also propose a multiple testing procedure based on a filtering approach that can discover the true alternatives.
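Under the classic Gaussian calibration mentioned above, the sparsity and signal size are tied to m via ε = m^(−β) and μ = √(2r log m) with 0 < β, r < 1. A minimal simulation sketch of this two-point mixture (the function name is illustrative; the Cauchy case in the text uses a different parametrisation):

```python
import numpy as np

def sample_sparse_mixture(m, beta, r, seed=None):
    """Draw m test statistics from the two-point Gaussian mixture
        (1 - eps) * N(0, 1) + eps * N(mu, 1)
    under the classic sparse-signal calibration
        eps = m**(-beta),  mu = sqrt(2 * r * log(m)),  0 < beta, r < 1.
    Returns the statistics and the indicator of the true alternatives."""
    rng = np.random.default_rng(seed)
    eps = m ** (-beta)
    mu = np.sqrt(2 * r * np.log(m))
    is_signal = rng.random(m) < eps          # which components are non-null
    x = rng.standard_normal(m) + mu * is_signal
    return x, is_signal
```

For example, with m = 10000 and β = 0.5, only about √m = 100 of the 10000 components carry a signal, which illustrates why the non-null fraction is vanishing as m grows.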
Benjamini and Hochberg (BH) compare the observed p-values to a linear threshold curve, reject the null hypotheses from the minimum up to the last up-crossing, and prove that the false discovery rate (FDR) is controlled.
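The step-up rule can be sketched as follows: sort the p-values, compare the i-th smallest to the linear threshold iα/m, and reject everything up to the last up-crossing. A minimal NumPy implementation of this standard procedure:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: reject all hypotheses whose
    sorted p-value p_(i) lies at or below the last index i where
    p_(i) <= i * alpha / m (the last up-crossing of the linear threshold)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = int(np.max(np.nonzero(below)[0]))  # last up-crossing
        reject[order[: k + 1]] = True
    return reject
```

For instance, with p-values (0.01, 0.02, 0.9, 0.8) and α = 0.05, the thresholds are (0.0125, 0.025, 0.0375, 0.05), so the first two hypotheses are rejected and the last two are not.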
However, there is an intrinsic difference in heavy-tailed settings. Were we to use the BH procedure, we would get a highly variable positive false discovery rate (pFDR). In our study we analyse the distribution of the p-values and devise a new multiple testing procedure that combines the usual case and the heavy-tailed case based on the empirical properties of the p-values. The filtering approach is designed to eliminate most p-values that are more likely to be uniform, while preserving most of the true alternatives. Based on the filtered p-values, we estimate the mode ϑ and define the rejection region R(ϑ, δ) = [ϑ − δ/2, ϑ + δ/2] so that the most informative p-values are included. The length δ is chosen by controlling a data-dependent estimate of the FDR at a desired level.
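A hypothetical sketch of this interval rejection idea follows. It is not the paper's exact rule: the mode ϑ is estimated here by the peak of a simple histogram, and the plug-in FDR estimate m·δ / #{p ∈ R} is an assumption based on uniform nulls contributing roughly m·δ points to any window of length δ. The function name, grid search over δ, and bin count are all illustrative choices:

```python
import numpy as np

def interval_rejection(pvals, delta_grid, fdr_level=0.1, bins=50):
    """Sketch of a mode-centred rejection interval R = [theta - d/2, theta + d/2].
    theta is the peak of a histogram of the p-values; among the candidate
    lengths d in delta_grid, keep the largest one whose plug-in FDR estimate
        m * d / #{p in R}
    stays at or below fdr_level (uniform nulls put about m * d points in R)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    counts, edges = np.histogram(p, bins=bins, range=(0.0, 1.0))
    peak = int(np.argmax(counts))
    theta = 0.5 * (edges[peak] + edges[peak + 1])  # histogram mode estimate
    best = None
    for delta in sorted(delta_grid):
        lo, hi = theta - delta / 2, theta + delta / 2
        n_rej = int(np.count_nonzero((p >= lo) & (p <= hi)))
        fdr_hat = m * delta / max(n_rej, 1)
        if fdr_hat <= fdr_level:
            best = (lo, hi, n_rej)   # largest qualifying delta wins
    return theta, best
```

On data where informative p-values cluster around a mode away from 0, the window grows until the uniform background would push the estimated FDR past the target level.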