
Log-normal distribution

Summary

In probability theory, a log-normal (or lognormal) distribution is a continuous probability distribution of a random variable whose logarithm is normally distributed. Thus, if the random variable X is log-normally distributed, then Y = ln(X) has a normal distribution. Equivalently, if Y has a normal distribution, then the exponential function of Y, X = exp(Y), has a log-normal distribution. A random variable which is log-normally distributed takes only positive real values. It is a convenient and useful model for measurements in exact and engineering sciences, as well as medicine, economics and other topics (e.g., energies, concentrations, lengths, prices of financial instruments, and other metrics).
The distribution is occasionally referred to as the Galton distribution or Galton's distribution, after Francis Galton. The log-normal distribution has also been associated with other names, such as the McAlister, Gibrat, and Cobb–Douglas distributions.
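The defining relationship above can be checked numerically. This is an illustrative sketch, not part of the original summary; the parameters mu and sigma are arbitrary choices:

```python
import numpy as np

# If Y ~ Normal(mu, sigma), then X = exp(Y) is log-normal, takes only
# positive values, and ln(X) recovers a normal sample.
rng = np.random.default_rng(0)
mu, sigma = 1.0, 0.5
y = rng.normal(mu, sigma, size=100_000)
x = np.exp(y)                                # log-normally distributed

assert (x > 0).all()                         # support is (0, inf)
# Sample statistics of ln(X) should be close to mu and sigma,
# and the sample mean of X close to the analytic exp(mu + sigma^2 / 2).
print(np.log(x).mean(), np.log(x).std())
print(x.mean(), np.exp(mu + sigma**2 / 2))
```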



Related concepts (44)

Normal distribution

In statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable.

Pareto distribution

The Pareto distribution, named after the Italian civil engineer, economist, and sociologist Vilfredo Pareto, is a power-law probability distribution used in the description of social, quality-control, and many other types of observable phenomena.

Probability distribution

In probability theory and statistics, a probability distribution is the mathematical function that gives the probabilities of occurrence of different possible outcomes for an experiment.

Related courses (53)

MATH-131: Probability and statistics

The course presents the basic notions of probability theory and statistical inference. The emphasis is on the main concepts and the most widely used methods.

FIN-415: Probability and stochastic calculus

This course gives an introduction to probability theory and stochastic calculus in discrete and continuous time. We study fundamental notions and techniques necessary for applications in finance such as option pricing, hedging, optimal portfolio choice and prediction problems.

ENV-400: Air pollution and climate change

A survey course describing the origins of air pollution and climate change.

Related units (2)

The reliability of new overhead electric and telecommunication lines depends principally on the quality of their support structures, which are generally made of wood, metal, or concrete. The complexity of a natural material such as wood requires a thorough analysis of the various factors that influence its overall quality. For wood poles, these factors include the initial forest growth pattern, the species of wood and its preservative treatment, ageing characteristics, and mechanical defects such as knots and cracks. Accumulating knowledge on how these variables contribute to the overall quality of a wood support structure permits optimal use of the resource: lower variability and higher strength allow optimal loading and spacing between structures, reducing the number needed over a given length of overhead line. If one assumes that in Western Europe one wood pole is employed for every two inhabitants, and that this proportion increases in less densely populated regions such as the US and Scandinavia, the economics of optimal use of wood as a resource soon become apparent. In less developed countries, the proportions and economics vary with the natural resources, such as wood, that they employ. The goal of this research is to establish, through non-destructive evaluation, a general probabilistic ageing law for wooden poles based on two distinct laws: (1) a law for new poles, studying the influence of grading out the weak elements on a normal law ("left-truncation of a normal distribution"); and (2) a law for in-field poles, exploiting parameters such as the pole's age, chemical treatment, species, and knots in order to define its damage law.

Point 1: the statistical distribution of new poles after grading by non-destructive ultrasonic sorting of high-mechanical-performance supports. Depending on the inspected species, this distribution is Gaussian or evolves towards a log-normal or three-parameter Weibull law. Grading revalorizes the properties of new poles and their design values while guaranteeing the reliability index required by design standards, or directly improving this nominal reliability (an economic gain and a reliability gain). Point 2: the statistical distribution of an aged in-field population (20-50 years old), approximated by a bimodal law that depends on: the distribution law of the new component (see point 1) and its asymmetric minimal extreme-value law, for an observation over 50 years; and the distribution at time t of the residual mechanical performance of a group of supports forming a local network, evaluated by non-destructive methods. The non-destructive evaluation is based on measurements of physical variables (density, biological moisture content) and descriptive variables of natural origin (diameter, knots, cracks, etc.) and accidental origin (diameter reduction, lightning cracks, etc.). The distribution at time t is then obtained from a multivariate non-destructive evaluation model, generalized to all species and treatments; this model is the other concrete goal of the thesis. In conclusion, the research demonstrates the influence and interaction of new-pole grading (the distribution at t0) on the modelling of the distribution at ti (the multivariate non-destructive model). The data used for these models come from a significant international database of inspected wood poles and case studies, the synthesis of about 15 years of research and development led by IBOIS-EPFL and its international partners.

The probabilistic approaches are validated against this large database and are therefore directly exploitable. On this basis, all standards dealing with new poles and with the inspection and maintenance of wooden-pole networks could be re-examined for a double gain. Economically: by increasing the capacity of new poles, profiting from objective quality assurance, and by extending the lifetime of in-field poles, knowing how to purge only those below the critical damage threshold. In reliability: by raising the reliability of the network from the "new pole" stage, eliminating the weakest components, and by maintaining this reliability over the whole lifetime of the network through cyclic preventive maintenance (every 5 to 8 years) and replacement of only the weakened poles.
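The "left-truncation of a normal distribution" invoked in point 1 can be sketched numerically: grading rejects poles below a strength threshold, so the surviving population follows a left-truncated normal law with a higher mean and a tighter spread. The values below (mean 40, standard deviation 8, threshold 30, in arbitrary strength units) are illustrative, not thesis data:

```python
import numpy as np

rng = np.random.default_rng(0)
mean, sd, threshold = 40.0, 8.0, 30.0             # hypothetical strengths
strengths = rng.normal(mean, sd, size=200_000)    # ungraded population
graded = strengths[strengths >= threshold]        # left-truncated normal sample

# Grading raises the mean and reduces the spread of the kept population.
print(graded.mean(), graded.std(), graded.min())
```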

We obtain quantitative bounds on the mixing properties of the Hamiltonian Monte Carlo (HMC) algorithm with target distribution in d-dimensional Euclidean space, showing that HMC mixes quickly whenever the target log-distribution is strongly concave and has Lipschitz gradients. We use a coupling argument to show that the popular leapfrog implementation of HMC can sample approximately from the target distribution in a number of gradient evaluations which grows like d^{1/2} with the dimension and grows at most polynomially in the strong convexity and Lipschitz-gradient constants. Our results significantly extend and improve on the dimension dependence of previous quantitative bounds on the mixing of HMC and of the unadjusted Langevin algorithm in this setting.
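The leapfrog implementation the abstract refers to can be sketched on a toy strongly log-concave target: the standard Gaussian, with potential U(x) = ||x||^2 / 2 and gradient x. The step size eps and path length L below are illustrative choices, not values from the paper:

```python
import numpy as np

def hmc_step(x, U, grad_U, eps=0.1, L=10, rng=None):
    """One leapfrog HMC step with Metropolis correction."""
    rng = rng or np.random.default_rng()
    p = rng.standard_normal(x.shape)             # resample momentum
    x_new, p_new = x.copy(), p.copy()
    p_new -= 0.5 * eps * grad_U(x_new)           # momentum half-step
    for _ in range(L - 1):
        x_new += eps * p_new                     # position full step
        p_new -= eps * grad_U(x_new)             # momentum full step
    x_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(x_new)           # final momentum half-step
    # Accept/reject on the Hamiltonian H = U(x) + ||p||^2 / 2,
    # which makes the chain exact for the target.
    dH = (U(x) + 0.5 * p @ p) - (U(x_new) + 0.5 * p_new @ p_new)
    return x_new if np.log(rng.random()) < dH else x

d = 5
U = lambda x: 0.5 * x @ x                        # standard Gaussian potential
grad_U = lambda x: x
rng = np.random.default_rng(1)
x = np.zeros(d)
draws = []
for _ in range(3000):
    x = hmc_step(x, U, grad_U, rng=rng)
    draws.append(x.copy())
draws = np.array(draws[1000:])                   # drop burn-in

# Per-coordinate mean and variance should be close to 0 and 1 for N(0, I).
print(draws.mean(), draws.var())
```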

This thesis studies the valuation and hedging of financial derivatives, which is fundamental for trading and risk-management operations in financial institutions. The three chapters in this thesis deal with derivatives whose payoffs are linked to interest rates, equity prices, and dividend payments.
The first chapter introduces a flexible framework based on polynomial jump-diffusions (PJD) to jointly price the term structures of dividends and interest rates. Prices for dividend futures, bonds, and the dividend paying stock are given in closed form. Option prices are approximated efficiently using a moment matching technique based on the principle of maximum entropy. An extensive calibration exercise shows that a parsimonious model specification has a good fit with Euribor interest rate swaps and swaptions, Euro Stoxx 50 index dividend futures and dividend options, and Euro Stoxx 50 index options.
The second chapter revisits the problem of pricing a continuously sampled arithmetic Asian option in the classical Black-Scholes setting. An identity in law links the integrated stock price to a one-dimensional polynomial diffusion, a particular instance of the PJD encountered in the first chapter. The Asian option price is approximated by a series expansion based on polynomials that are orthogonal with respect to the log-normal distribution. All terms in the series are fully explicit, and no numerical integration or special functions are involved. The moment indeterminacy of the log-normal distribution introduces an asymptotic bias in the series; however, numerical experiments show that the bias can safely be ignored in practice.
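The moment indeterminacy mentioned here is a classical property of the log-normal law: all its moments E[X^n] = exp(n mu + n^2 sigma^2 / 2) are finite, yet they do not determine the distribution uniquely. A hedged numerical check of the moment formula (parameters are arbitrary, not from the chapter):

```python
import numpy as np

mu, sigma = 0.0, 0.5
rng = np.random.default_rng(2)
x = np.exp(rng.normal(mu, sigma, size=1_000_000))    # log-normal sample

# Sample moments should match E[X^n] = exp(n*mu + n^2*sigma^2/2).
for n in (1, 2, 3):
    analytic = np.exp(n * mu + n**2 * sigma**2 / 2)
    print(n, (x**n).mean(), analytic)
```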
The last chapter presents a non-parametric method to construct a maximally smooth discount curve from observed market prices of linear interest rate products such as swaps, forward rate agreements, or coupon bonds. The discount curve is given in closed form and only requires basic linear algebra operations. The method is illustrated with several practical examples.