
Category: Stochastic processes

Summary

In probability theory and related fields, a stochastic (stəˈkæstɪk) or random process is a mathematical object usually defined as a sequence of random variables, where the index of the sequence has the interpretation of time. Stochastic processes are widely used as mathematical models of systems and phenomena that appear to vary in a random manner. Examples include the growth of a bacterial population, an electrical current fluctuating due to thermal noise, and the movement of a gas molecule. Stochastic processes have applications in many disciplines such as biology, chemistry, ecology, neuroscience, physics, signal processing, control theory, information theory, computer science, and telecommunications. Furthermore, seemingly random changes in financial markets have motivated the extensive use of stochastic processes in finance.
Applications and the study of phenomena have in turn inspired the proposal of new stochastic processes. Examples of such stochastic processes include the Wiener process or Brownian motion process, used by Louis Bachelier to study price changes on the Paris Bourse, and the Poisson process, used by A. K. Erlang to study the number of phone calls occurring in a certain period of time. These two stochastic processes are considered the most important and central in the theory of stochastic processes, and were discovered repeatedly and independently, both before and after Bachelier and Erlang, in different settings and countries.
The term random function is also used to refer to a stochastic or random process, because a stochastic process can also be interpreted as a random element in a function space. The terms stochastic process and random process are used interchangeably, often with no specific mathematical space for the set that indexes the random variables. But often these two terms are used when the random variables are indexed by the integers or an interval of the real line. If the random variables are indexed by the Cartesian plane or some higher-dimensional Euclidean space, then the collection of random variables is usually called a random field instead.
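As a minimal illustration of these definitions (a sketch added for clarity, not part of the original page), the following Python snippet samples one realisation of a simple symmetric random walk — a stochastic process indexed by the integers, where each index t carries a random variable S_t:

```python
import random

def random_walk(n_steps, seed=0):
    """Sample one path S_0, ..., S_n of a simple symmetric random walk:
    a stochastic process indexed by the integers 0..n."""
    rng = random.Random(seed)
    path = [0]
    for _ in range(n_steps):
        # each step is an independent +1/-1 increment
        path.append(path[-1] + rng.choice([-1, 1]))
    return path

path = random_walk(10)  # one realisation of the process
```

Rerunning with a different seed produces a different realisation of the same process.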

Official source

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.

Related concepts (31)

Related courses (1)

Related publications (13)

Related lectures (2)

Related categories (60)

Lévy process

In probability theory, a Lévy process, named after the French mathematician Paul Lévy, is a stochastic process with independent, stationary increments: it represents the motion of a point whose successive displacements are random, in which displacements in pairwise disjoint time intervals are independent, and displacements in different time intervals of the same length have identical probability distributions. A Lévy process may thus be viewed as the continuous-time analog of a random walk.

Stopping time

In probability theory, in particular in the study of stochastic processes, a stopping time (also Markov time, Markov moment, optional stopping time or optional time) is a specific type of “random time”: a random variable whose value is interpreted as the time at which a given stochastic process exhibits a certain behavior of interest. A stopping time is often defined by a stopping rule, a mechanism for deciding whether to continue or stop a process on the basis of the present position and past events, and which will almost always lead to a decision to stop at some finite time.

Infinitesimal generator (stochastic processes)

In mathematics — specifically, in stochastic analysis — the infinitesimal generator of a Feller process (i.e. a continuous-time Markov process satisfying certain regularity conditions) is a Fourier multiplier operator that encodes a great deal of information about the process. The generator is used in evolution equations such as the Kolmogorov backward equation, which describes the evolution of statistics of the process; its L2 Hermitian adjoint is used in evolution equations such as the Fokker–Planck equation, also known as Kolmogorov forward equation, which describes the evolution of the probability density functions of the process.

MATH-330: Martingales et mouvement brownien

Introduction to the theory of discrete-time martingales, in particular to the convergence and stopping theorems. Application to branching processes. Introduction to Brownian motion and study

Martingales and Brownian Motion: Drift and Exit Time (MATH-330: Martingales et mouvement brownien)

Explores Brownian motion with drift, exit probabilities, and average exit times from intervals.
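As an illustrative discrete analogue of exit probabilities from an interval (a sketch added here, not the course's own material): for a simple symmetric random walk started at `start`, the exact probability of exiting (lower, upper) through the upper boundary is (start - lower) / (upper - lower), which a Monte Carlo estimate can check:

```python
import random

def exit_up_probability(start, lower, upper, n_trials=2000, seed=3):
    """Monte Carlo estimate of the probability that a simple symmetric
    random walk started at `start` exits the interval (lower, upper)
    through the upper boundary (the classic gambler's ruin setup)."""
    rng = random.Random(seed)
    hits_upper = 0
    for _ in range(n_trials):
        s = start
        while lower < s < upper:
            s += rng.choice([-1, 1])
        hits_upper += (s == upper)
    return hits_upper / n_trials

# Exact answer for the symmetric walk: (2 - 0) / (10 - 0) = 0.2
p = exit_up_probability(start=2, lower=0, upper=10)
```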

Martingales and Brownian Motion: Three Stopping Theorems (MATH-330: Martingales et mouvement brownien)

Explores three stopping theorems in martingales and Brownian motion.

Mathematical statistics

Mathematical statistics is the application of probability theory, a branch of mathematics, to statistics, as opposed to techniques for collecting statistical data. Specific mathematical techniques which are used for this include mathematical analysis, linear algebra, stochastic analysis, differential equations, and measure theory. Statistical data collection is concerned with the planning of studies, especially with the design of randomized experiments and with the planning of surveys using random sampling.

Differential analysis

In mathematics, a differential equation is an equation that relates one or more unknown functions and their derivatives. In applications, the functions generally represent physical quantities, the derivatives represent their rates of change, and the differential equation defines a relationship between the two. Such relations are common; therefore, differential equations play a prominent role in many disciplines including engineering, physics, economics, and biology.
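As a minimal worked example (added for illustration, not from the original page), Euler's method numerically approximates the solution of a differential equation dy/dt = f(t, y) by stepping along the derivative:

```python
def euler(f, y0, t0, t1, n):
    """Euler's method: approximate y(t1) for dy/dt = f(t, y), y(t0) = y0,
    using n fixed steps of size h = (t1 - t0) / n."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)  # follow the local rate of change
        t += h
    return y

# Exponential decay dy/dt = -y with y(0) = 1 has exact solution y(1) = e^{-1}
approx = euler(lambda t, y: -y, 1.0, 0.0, 1.0, 1000)
```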

Bayesian inference

Bayesian inference (ˈbeɪziən or ˈbeɪʒən ) is a method of statistical inference in which Bayes' theorem is used to update the probability for a hypothesis as more evidence or information becomes available. Bayesian inference is an important technique in statistics, and especially in mathematical statistics. Bayesian updating is particularly important in the dynamic analysis of a sequence of data. Bayesian inference has found application in a wide range of activities, including science, engineering, philosophy, medicine, sport, and law.

Both natural and manufactured surfaces have been found to be self-affine over many length scales. Yet it is still not understood which mechanisms lead to the final self-affine morphology, which is characterized by a persistent Hurst exponent (greater than 0.5; see section 2.1 for the definitions). The scope of this project is therefore to develop new discrete growth models that can help gather insights into the formation of self-affine surfaces in frictional processes.

2020

We study the optimal strategy for a sailboat to reach an upwind island under the hypothesis that the wind direction fluctuates according to a Brownian motion and the wind speed is constant. The work is motivated by a concrete problem that typically arises during sailing regattas, namely finding the best tacking strategy to reach the upwind buoy as quickly as possible. We assume that there is no loss of time when tacking. We first guess an optimal strategy and then establish its optimality by using the dynamic programming principle. The Hamilton-Jacobi-Bellman equation obtained is a parabolic PDE with Neumann boundary conditions. Since it does not admit a closed-form solution, the proof of optimality involves an intricate estimate of derivatives of the value function. We explicitly provide the asymptotic shape of the value function. In order to do so, we prove a result on the large-time behavior of solutions to time-dependent parabolic PDEs using a coupling argument. In particular, a boat far from the island approaches the island at $\frac{1}{2} + \frac{\sqrt{2}}{\pi} = 95.02\%$ of the boat's speed.

In this paper, we work on a class of self-interacting nearest neighbor random walks, introduced in [Probab. Theory Related Fields 154 (2012) 149-163], for which there is competition between repulsion of neighboring edges and attraction of next-to-neighboring edges. Erschler, Toth and Werner proved in [Probab. Theory Related Fields 154 (2012) 149-163] that, for any $L \geq 1$, if the parameter $\alpha$ belongs to a certain interval $(\alpha_{L+1}, \alpha_L)$, then such random walks localize on $L + 2$ sites with positive probability. They also conjectured that this is the almost sure behavior. We prove this conjecture partially, stating that the walk localizes on $L + 2$ or $L + 3$ sites almost surely, under the same assumptions. We also prove that, if $\alpha \in (1, +\infty) = (\alpha_2, \alpha_1)$, then the walk localizes a.s. on 3 sites.