
# Exponential distribution

Summary

In probability theory and statistics, the exponential distribution, or negative exponential distribution, is the probability distribution of the time between events in a Poisson point process, i.e., a process in which events occur continuously and independently at a constant average rate. It is a particular case of the gamma distribution. It is the continuous analogue of the geometric distribution, and it has the key property of being memoryless. In addition to its use in the analysis of Poisson point processes, it appears in various other contexts.
The exponential distribution is not the same as the class of exponential families of distributions. This is a large class of probability distributions that includes the exponential distribution as one of its members, but also includes many other distributions, like the normal, binomial, gamma, and Poisson distributions.
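The memoryless property mentioned above states that P(X > s + t | X > s) = P(X > t). As a quick sketch (not part of the original page; the rate λ = 1.5 and the points s, t are arbitrary illustrative choices), this can be checked both analytically and by simulation:

```python
import math
import random

rate = 1.5  # assumed rate parameter λ (illustrative choice)

def survival(t, lam):
    """P(X > t) for an Exponential(lam) random variable."""
    return math.exp(-lam * t)

# Analytic check: P(X > s + t | X > s) = S(s + t) / S(s) equals S(t).
s, t = 0.7, 2.0
conditional = survival(s + t, rate) / survival(s, rate)
assert math.isclose(conditional, survival(t, rate))

# Empirical check by simulation.
random.seed(0)
n = 200_000
samples = [random.expovariate(rate) for _ in range(n)]
exceed_s = [x for x in samples if x > s]
cond_frac = sum(1 for x in exceed_s if x > s + t) / len(exceed_s)
print(abs(cond_frac - survival(t, rate)))  # small, shrinks as n grows
```

The geometric distribution is the only discrete distribution with the analogous property, which is why the two are considered counterparts.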
Definitions
Probability density function
The probability density function (pdf) of an exponential distribution is f(x; λ) = λe^(−λx) for x ≥ 0 (and 0 for x < 0), where λ > 0 is the rate parameter.
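Because the cumulative distribution function F(x) = 1 − e^(−λx) inverts in closed form, exponential variates are easy to generate by inverse-transform sampling: if U ~ Uniform(0, 1), then X = −ln(U)/λ is Exponential(λ). A minimal sketch (λ = 2.0 is an arbitrary choice):

```python
import math
import random

def sample_exponential(lam, rng=random.random):
    """Draw X = -ln(U)/lam with U ~ Uniform(0,1); X is Exponential(lam)."""
    return -math.log(rng()) / lam

random.seed(42)
lam = 2.0
xs = [sample_exponential(lam) for _ in range(100_000)]
mean = sum(xs) / len(xs)
print(mean)  # close to the theoretical mean 1/lam = 0.5
```

In practice one would use a library generator (e.g. `random.expovariate`), which implements the same idea.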

Official source

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.



Related courses (61)

MATH-442: Statistical theory

The course aims at developing certain key aspects of the theory of statistics, providing a common general framework for statistical methodology. While the main emphasis will be on the mathematical aspects of statistics, an effort will be made to balance rigor and intuition.

COM-406: Foundations of Data Science

We discuss a set of topics that are important for understanding modern data science but are typically not taught in an introductory ML course. In particular, we discuss fundamental ideas and techniques from probability, information theory, and signal processing.

PHYS-403: Computer simulation of physical systems I

The two main topics covered by this course are classical molecular dynamics and the Monte Carlo method.


Related concepts (79)

Normal distribution

In statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is f(x) = (1/(σ√(2π))) e^(−(x − μ)²/(2σ²)).

Probability distribution

In probability theory and statistics, a probability distribution is the mathematical function that gives the probabilities of occurrence of different possible outcomes for an experiment. It is a mathematical description of a random phenomenon in terms of its sample space and the probabilities of events.

Gamma distribution

In probability theory and statistics, the gamma distribution is a two-parameter family of continuous probability distributions. The exponential distribution, Erlang distribution, and chi-squared distribution are special cases of the gamma distribution.
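The connection between the two distributions can be illustrated numerically: the sum of k independent Exponential(λ) variables follows a gamma (Erlang) distribution with shape k and rate λ, so its mean is k/λ and its variance k/λ². A small simulation sketch (the values of λ, k, and n are arbitrary):

```python
import random

random.seed(1)
lam, k, n = 2.0, 3, 100_000

# Sum of k independent Exponential(lam) draws is Gamma(shape=k, rate=lam).
sums = [sum(random.expovariate(lam) for _ in range(k)) for _ in range(n)]
mean = sum(sums) / n
var = sum((x - mean) ** 2 for x in sums) / n

print(mean, var)  # near k/lam = 1.5 and k/lam**2 = 0.75
```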

Related publications (100)

In this thesis, we study the stochastic heat equation (SHE) on bounded domains and on the whole Euclidean space $\R^d.$ We confirm the intuition that as the bounded domain increases to the whole space, both solutions become arbitrarily close to one another. Both vanishing Dirichlet and Neumann boundary conditions are considered.

We first study the nonlinear SHE in any space dimension with multiplicative correlated noise and bounded initial data. We prove that the solutions to SHE on an increasing sequence of domains converge exponentially fast to the solution to SHE on $\R^d.$ Uniform convergence on compact sets is obtained for all $p$-moments. The conditions that need to be imposed on the noise are the same as those required to ensure existence of a random field solution. A Gronwall-type iteration argument is used together with uniform bounds on the solutions, which are surprisingly valid for the entire sequence of increasing domains.

We then study SHE in space dimension $d\ge 2$ with additive white noise and bounded initial data. Even though both solutions need to be considered as distributions, their difference is proved to be smooth. In fact, the order of smoothness depends only on the regularity of the boundary of the increasing sequence of domains. We prove that the Fourier transform, in the sense of distributions, of the solution to SHE on $\R^d$ does not have any locally mean-square integrable representative. Therefore, convergence is studied in local versions of Sobolev spaces. Again, an exponential rate is obtained.

Finally, we study the Anderson model for SHE with correlated noise and initial data given by a measure. We obtain a special expression for the second moment of the difference between the solution on $\R^d$ and that on a bounded domain. The contribution of the initial condition is made explicit. For example, exponentially fast convergence on compact sets is obtained for any initial condition with polynomial growth. More interestingly, from a given convergence rate, we can decide whether some initial data is admissible.

Beamforming is a technique that attenuates or amplifies the sound arriving from a particular direction; in other words, it is a way to estimate the power coming from a particular direction. It is therefore also used as a direction-of-arrival (DOA) estimator and, thus, for localizing signal sources. For this project, I designed a robust and efficient system that estimates the DOA of source signals and attenuates, in real time, the noise coming from any direction other than the steering vector, using MATLAB.

2015

Shannon, in his landmark 1948 paper, developed a framework for characterizing the fundamental limits of information transmission. Among other results, he showed that reliable communication over a channel is possible at any rate below its capacity. In 2008, Arikan discovered polar codes, the only class of explicitly constructed low-complexity codes that achieve the capacity of any binary-input memoryless symmetric-output channel. Arikan's polar transform turns independent copies of a noisy channel into a collection of synthetic almost-noiseless and almost-useless channels. Polar codes are realized by sending data bits over the almost-noiseless channels and recovering them, at the receiver, using a low-complexity successive-cancellation (SC) decoder.

In the first part of this thesis, we study polar codes for communications. When the underlying channel is an erasure channel, we show that almost all correlation coefficients between the erasure events of the synthetic channels decay rapidly. Hence, the sum of the erasure probabilities of the information-carrying channels is a tight estimate of the block-error probability of polar codes when used for communication over the erasure channel. We study SC list (SCL) decoding, a method for boosting the performance of short polar codes. We prove that the method has a numerically stable formulation in log-likelihood ratios. In hardware, this formulation increases the decoding throughput by 53% and reduces the decoder's size by about 33%. We present empirical results on the trade-off between the length of the CRC and the performance gains in a CRC-aided version of the list decoder. We also make numerical comparisons of the performance of long polar codes under SC decoding with that of short polar codes under SCL decoding.

Shannon's framework also quantifies the secrecy of communications. Wyner, in 1975, proposed a model for communications in the presence of an eavesdropper. It was shown that, at rates below the secrecy capacity, there exist reliable communication schemes in which the amount of information leaked to the eavesdropper decays exponentially in the block length of the code. In the second part of this thesis, we study the rate of this decay. We derive the exact exponential decay rate of the ensemble average of the information leaked to the eavesdropper in Wyner's model when a randomly constructed code is used for secure communications. For codes sampled from the ensemble of i.i.d. random codes, we show that the previously known lower bound on the exponent is exact. Our ensemble-optimal exponent for random constant-composition codes improves the lower bound extant in the literature. Finally, we show that random linear codes have the same secrecy power as i.i.d. random codes.

The key to securing messages against an eavesdropper is to exploit the randomness of her communication channel so that the statistics of her observation resemble those of a pure noise process for any sent message. We study the effect of feedback on this approximation and show that it does not reduce the minimum entropy rate required to approximate a given process. However, we give examples where variable-length schemes achieve much larger exponents in this approximation in the presence of feedback than the exponents in systems without feedback. Upper-bounding the best exponent that block codes attain, we conclude that variable-length coding is necessary for achieving the improved exponents.
