Probability mass function

Summary

In probability and statistics, a probability mass function (PMF) is a function that gives the probability that a discrete random variable is exactly equal to some value. It is sometimes also known as the discrete probability density function. The probability mass function is often the primary means of defining a discrete probability distribution, and such functions exist for both scalar and multivariate random variables whose domain is discrete.
A probability mass function differs from a probability density function (PDF) in that the latter is associated with continuous rather than discrete random variables. A PDF must be integrated over an interval to yield a probability.
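To make this distinction concrete, here is a minimal Python sketch (the fair-coin PMF and the uniform density on [0, 1] are illustrative assumptions, not taken from this page): a PMF is evaluated at a single point, while a PDF yields a probability only after integration over an interval.

```python
# Discrete case: the probability of an exact value is read directly
# off the PMF. Here, a fair coin with outcomes 0 and 1.
coin_pmf = {0: 0.5, 1: 0.5}
p_discrete = coin_pmf[1]  # P(X = 1) = 0.5

# Continuous case: for the uniform density f(u) = 1 on [0, 1], any
# single point has probability zero; a probability is obtained only by
# integrating f over an interval, here [0.2, 0.7] via the midpoint rule.
def uniform_pdf(u):
    return 1.0 if 0.0 <= u <= 1.0 else 0.0

a, b, n = 0.2, 0.7, 100_000
width = (b - a) / n
p_continuous = sum(uniform_pdf(a + (i + 0.5) * width) * width for i in range(n))
# p_continuous is (up to rounding) b - a = 0.5
```
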
The value of the random variable having the largest probability mass is called the mode.
Formal definition
The probability mass function is the probability distribution of a discrete random variable X; it gives the possible values and their associated probabilities. It is the function p: \R \to [0,1] defined by p(x) = P(X = x), which satisfies p(x) \geq 0 for all x and \sum_x p(x) = 1, where the sum runs over the (countable) set of possible values of X.
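As an illustration of this definition, a short Python sketch (the fair six-sided die is an assumed example): the PMF assigns mass 1/6 to each face, the masses sum to 1, and since every face attains the maximal mass, each face is a mode.

```python
# PMF of a fair six-sided die: p(x) = 1/6 for x in {1, ..., 6}, else 0.
def die_pmf(x):
    return 1.0 / 6.0 if x in {1, 2, 3, 4, 5, 6} else 0.0

# The masses over the support must sum to 1.
total = sum(die_pmf(x) for x in range(1, 7))

# A mode is any value with maximal probability mass; for the fair die,
# every face qualifies, and max() returns the first such value.
mode = max(range(1, 7), key=die_pmf)
```
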

Official source

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.

Related courses (38)

FIN-415: Probability and stochastic calculus

This course gives an introduction to probability theory and stochastic calculus in discrete and continuous time. We study fundamental notions and techniques necessary for applications in finance such as option pricing, hedging, optimal portfolio choice and prediction problems.

MATH-233: Probability and statistics

The course gives an introduction to probability and statistics for physicists.

MATH-234(b): Probability and statistics

The course presents the basic notions of probability theory and statistical inference. The emphasis is on the main concepts and the most widely used methods.


Related concepts (23)

Probability distribution

In probability theory and statistics, a probability distribution is the mathematical function that gives the probabilities of occurrence of different possible outcomes for an experiment.

Probability theory

Probability theory, or probability calculus, is the branch of mathematics concerned with probability.

Random variable

A random variable (also called random quantity, aleatory variable, or stochastic variable) is a mathematical formalization of a quantity or object which depends on random events.

Related publications (17)

In this thesis, we study systems of linear and/or non-linear stochastic heat equations and fractional heat equations in spatial dimension $1$ driven by space-time white noise. The main topic is the study of hitting probabilities for the solutions to these systems.
We first study the properties of the probability density functions of the solution to non-linear systems of stochastic fractional heat equations driven by multiplicative space-time white noise. Using the techniques of Malliavin calculus, we prove that the one-point probability density function of the solution is infinitely differentiable, uniformly bounded and positive everywhere. Moreover, a Gaussian-type upper bound on the two-point probability density function is obtained by a detailed analysis of the small eigenvalues of the Malliavin matrix. We establish an optimal lower bound on hitting probabilities for the (non-Gaussian) solution, which is as sharp as that for the Gaussian solution to a system of linear equations.
We develop a new method to study the upper bound on hitting probabilities, from the perspective of probability density functions. For the solution to the linear stochastic heat equation, we prove that the random vector, which consists of the solution and the supremum of a linear increment of the solution over a time segment, has an infinitely differentiable probability density function. We derive a formula for this density and establish a Gaussian-type upper bound. The smoothness property and Gaussian-type upper bound for the density of the supremum of the solution over a space-time rectangle touching the $t = 0$ axis are also studied. Furthermore, we extend these results to the solutions of systems of linear stochastic fractional heat equations.
For a system of linear stochastic heat equations with Dirichlet boundary conditions, we present a sufficient condition for certain sets to be hit with probability one.

We study the regularity of the probability density function of the supremum of the solution to the linear stochastic heat equation. Using a general criterion for the smoothness of densities for locally nondegenerate random variables, we establish the smoothness of the joint density of the random vector whose components are the solution and the supremum of an increment in time of the solution over an interval (at a fixed spatial position), and the smoothness of the density of the supremum of the solution over a space-time rectangle that touches the $t = 0$ axis. Applying the properties of the divergence operator, we establish a Gaussian-type upper bound on these two densities respectively, which presents a close connection with the Hölder-continuity properties of the solution.

The subject of the present thesis is an optimal prediction problem concerning the ultimate maximum of a stable Lévy process over a finite interval of time. Such "optimal prediction" problems are of both theoretical and practical interest; in particular, they have applications in finance. For instance, suppose that an investor has a long position in one financial asset, whose price is modelled by some stochastic process. The investor's objective is to determine a "best moment" at which to close out the position and to sell the asset at the highest possible price. This optimal decision must be based on continuous observations of the asset price performance and only on the information accumulated to date. Hence, the investor should use a prediction (forecasting) of the future evolution of the price of the financial security. We examine this problem in the case where the asset price is modelled by a Lévy process. Indeed, during the last several years, the application of Lévy processes in the modelling of financial asset returns has become one of the active research directions in quantitative finance. Thus, this thesis contains new results concerning Lévy processes. We derive the law of the supremum process associated with a strictly stable Lévy process with no negative jumps which is not a subordinator. We note that the latter problem dates back to 1973. In particular, we show that the probability density function of the supremum process can be expressed using an explicit power series representation or via an integral representation. We also derive the infinitesimal generator of the reflected process associated with a general strictly stable Lévy process. Throughout this thesis, we apply the theory of optimal stopping, the methods of fractional differential calculus, and some results from fluctuation theory. Implementing these theories in the context of Lévy processes requires the development of specific analytical results.
In the case where the asset price is modelled by a spectrally positive stable Lévy process, we describe the optimal strategy under certain conditions on the model parameters. The optimal strategy is of the following form: the investor must stop the observation of the price process and sell the asset as soon as the associated reflected process crosses for the first time a particular stopping boundary. We also provide numerical estimates and simulation examples of the results obtained by using this strategy.