Poisson point process

Summary

In probability, statistics and related fields, a Poisson point process is a type of random mathematical object that consists of points randomly located on a mathematical space, with the essential feature that the points occur independently of one another. The Poisson point process is often called simply the Poisson process, but it is also called a Poisson random measure, Poisson random point field or Poisson point field. This point process has convenient mathematical properties, which have led to its being frequently defined in Euclidean space and used as a mathematical model for seemingly random processes in numerous disciplines such as astronomy, biology, ecology, geology, seismology, physics, economics, and telecommunications.
The process is named after the French mathematician Siméon Denis Poisson, despite Poisson never having studied the process. Its name derives from the fact that if a collection of random points in some space forms a Poisson process, then the number of points in a region of finite size is a random variable with a Poisson distribution.
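As an illustrative sketch (not part of this page), a homogeneous Poisson process on an interval can be simulated by accumulating independent exponential inter-arrival times; the point count in the interval is then Poisson distributed with mean rate × length. The function name `poisson_process_1d` and all parameter values are assumptions for the example.

```python
import random

def poisson_process_1d(rate, t_max, rng):
    """Sample the points of a homogeneous Poisson process on [0, t_max]
    by accumulating i.i.d. exponential(rate) inter-arrival times."""
    points = []
    t = rng.expovariate(rate)
    while t < t_max:
        points.append(t)
        t += rng.expovariate(rate)
    return points

# The number of points in [0, t_max] is Poisson(rate * t_max), so the
# average count over many replications should be close to rate * t_max.
rng = random.Random(42)
counts = [len(poisson_process_1d(5.0, 10.0, rng)) for _ in range(2000)]
mean_count = sum(counts) / len(counts)
print(round(mean_count, 1))  # close to 5.0 * 10.0 = 50
```

The independence of point counts in disjoint intervals, the defining feature mentioned above, follows from the memorylessness of the exponential inter-arrival times.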

Official source

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.

Related courses (37)

FIN-415: Probability and stochastic calculus

This course gives an introduction to probability theory and stochastic calculus in discrete and continuous time. We study fundamental notions and techniques necessary for applications in finance such as option pricing, hedging, optimal portfolio choice and prediction problems.

MATH-414: Stochastic simulation

The student who follows this course will get acquainted with computational tools used to analyze systems with uncertainty arising in engineering, physics, chemistry, and economics. The focus will be on sampling methods such as Monte Carlo, quasi-Monte Carlo, and Markov chain Monte Carlo.

MATH-600: Optimization and simulation

Master state-of-the-art methods in optimization with heuristics and simulation.
Work involves:

- reading the material beforehand
- class hours to discuss the material and solve problems
- homework

Related concepts (18)

Poisson distribution

In probability theory and statistics, the Poisson distribution is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time or space, if these events occur with a known constant mean rate and independently of the time since the last event.
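The Poisson probability mass function can be written down directly from its definition, P(N = k) = λᵏ e^(−λ) / k!. A minimal sketch (the helper name `poisson_pmf` is an assumption for this example):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(N = k) for N ~ Poisson(lam): lam**k * exp(-lam) / k!"""
    return lam ** k * exp(-lam) / factorial(k)

# The probabilities over all k sum to 1; truncating at k = 50 for
# lam = 4 leaves only a negligible tail.
total = sum(poisson_pmf(k, 4.0) for k in range(51))
print(round(total, 6))  # ≈ 1.0
```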

Queueing theory

Queueing theory is the mathematical study of waiting lines, or queues. A queueing model is constructed so that queue lengths and waiting times can be predicted.

Stochastic process

In probability theory and related fields, a stochastic (/stəˈkæstɪk/) or random process is a mathematical object usually defined as a sequence of random variables, where the index of the sequence often has the interpretation of time.

Related units (11)

Rodolfo Javier Prieto Katunaric

This thesis consists of three chapters. The first chapter endogenizes technological change by introducing a stylized innovation process driven by an R&D-dependent Poisson process in a Cox, Ingersoll and Ross (1985) production economy. The model reproduces some of the features of the long-run risk literature, that is, endogenous consumption growth departs slightly from the i.i.d. framework, and shocks to realized and expected consumption growth are priced in equilibrium. Equilibrium dynamics suggest the use of general investment-based measures, such as the R&D-to-capital ratio, to identify the long-run risk component in aggregate consumption growth. The second chapter examines the impact of risk-based portfolio constraints on asset prices in a standard exchange economy model where agents have different risk aversion. Constrained agents scale down their benchmark portfolio and behave locally like power utility investors with risk aversion that depends on current market conditions. The equilibrium is characterized using the consumption share of the constrained agents and allows for explicit existence results. The imposition of constraints on active market participants dampens fundamental shocks when they bind, a result that challenges recent studies suggesting that risk management rules serve to amplify aggregate fluctuations. The results also dispute the belief that capital regulations make financial crises larger and more costly, as constraints are more likely to bind in bad times. Constraints may give rise to equilibrium asset pricing bubbles, a result that is associated with the risk aversion distribution across agents and the severity of the constraint.
The third chapter studies a pure exchange economy with three types of agents: constrained agents, who are subject to portfolio constraints; unconstrained agents, who are free to choose the composition of their portfolio and face a standard nonnegativity constraint on wealth; and arbitrageurs, who are unconstrained but may incur marked-to-market losses, which are bounded by a state-dependent constraint. This credit condition is valuable when there are asset pricing bubbles, which arise endogenously because of the presence of constrained agents. The size of the bubbles depends primarily on the ability of the arbitrageur to exploit the arbitrage opportunities created by the mispriced assets, and hence on the credit conditions. The bubble on the stock vanishes in the limit of infinite credit. Even though all agents have logarithmic utility, the stock price and volatility display the "leverage effect". The impact of the form of the wealth constraint on the results is also assessed.

We propose and prove a theorem that allows the calculation of a class of functionals on Poisson point processes that have the form of expected values of sum-products of functions. In proving the theorem, we present a variant of the Campbell-Mecke theorem from stochastic geometry. We proceed to apply our result in the calculation of expected values involving interference in wireless Poisson networks. Based on this, we derive outage probabilities for transmissions in a Poisson network with Nakagami fading. Our results extend the stochastic geometry toolbox used for the mathematical analysis of interference-limited wireless networks.
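The sum-product functionals in this abstract generalize Campbell's theorem from stochastic geometry, which for a homogeneous Poisson process Φ with rate λ gives E[Σ_{x∈Φ} f(x)] = λ ∫ f(x) dx. A small Monte Carlo check of this identity in one dimension (the setup, function name `mc_expected_sum`, and parameter values are illustrative assumptions, not taken from the paper):

```python
import random

def mc_expected_sum(rate, t_max, f, n_rep, rng):
    """Monte Carlo estimate of E[ sum_{x in Phi} f(x) ] for a
    homogeneous Poisson process Phi with the given rate on [0, t_max]."""
    total = 0.0
    for _ in range(n_rep):
        # One realization: exponential inter-arrival times until t_max.
        t = rng.expovariate(rate)
        while t < t_max:
            total += f(t)
            t += rng.expovariate(rate)
    return total / n_rep

rng = random.Random(7)
estimate = mc_expected_sum(2.0, 1.0, lambda x: x * x, 20000, rng)
campbell = 2.0 / 3.0  # Campbell: rate * integral_0^1 x^2 dx = 2 * (1/3)
# The estimate should agree with the closed form up to Monte Carlo error
# (the standard error here is roughly 0.005).
print(abs(estimate - campbell) < 0.05)
```

Expected values of products of such sums, which arise in the interference calculations mentioned above, require the extended Campbell-Mecke machinery the paper develops rather than this first-moment identity.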

This thesis studies statistical inference in the high energy physics unfolding problem, which is an ill-posed inverse problem arising in data analysis at the Large Hadron Collider (LHC) at CERN. Any measurement made at the LHC is smeared by the finite resolution of the particle detectors and the goal in unfolding is to use these smeared measurements to make nonparametric inferences about the underlying particle spectrum. Mathematically the problem consists in inferring the intensity function of an indirectly observed Poisson point process. Rigorous uncertainty quantification of the unfolded spectrum is of central importance to particle physicists. The problem is typically solved by first forming a regularized point estimator in the unfolded space and then using the variability of this estimator to form frequentist confidence intervals. Such confidence intervals, however, underestimate the uncertainty, since they neglect the bias that is used to regularize the problem. We demonstrate that, as a result, conventional statistical techniques as well as the methods that are presently used at the LHC yield confidence intervals which may suffer from severe undercoverage in realistic unfolding scenarios. We propose two complementary ways of addressing this issue. The first approach applies to situations where the unfolded spectrum is expected to be a smooth function and consists in using an iterative bias-correction technique for debiasing the unfolded point estimator obtained using a roughness penalty. We demonstrate that basing the uncertainties on the variability of the bias-corrected point estimator provides significantly improved coverage with only a modest increase in the length of the confidence intervals, even when the amount of bias-correction is chosen in a data-driven way. 
We compare the iterative bias-correction to an alternative debiasing technique based on undersmoothing and find that, in several situations, bias-correction provides shorter confidence intervals than undersmoothing. The new methodology is applied to unfolding the Z boson invariant mass spectrum measured in the CMS experiment at the LHC. The second approach exploits the fact that a significant portion of LHC particle spectra are known to have a steeply falling shape. A physically justified way of regularizing such spectra is to impose shape constraints in the form of positivity, monotonicity and convexity. Moreover, when the shape constraints are applied to an unfolded confidence set, one can regularize the length of the confidence intervals without sacrificing coverage. More specifically, we form shape-constrained confidence intervals by considering all those spectra that satisfy the shape constraints and fit the smeared data within a given confidence level. This enables us to derive regularized unfolded uncertainties which have by construction guaranteed simultaneous finite-sample coverage, provided that the true spectrum satisfies the shape constraints. The uncertainties are conservative, but still usefully tight. The method is demonstrated using simulations designed to mimic unfolding the inclusive jet transverse momentum spectrum at the LHC.

Related lectures (103)