
Concept: Variance

Summary

In probability theory and statistics, variance is the expected value of the squared deviation of a random variable from its mean; equivalently, the standard deviation is the square root of the variance. Variance is a measure of dispersion, meaning it is a measure of how far a set of numbers is spread out from their average value. It is the second central moment of a distribution and the covariance of the random variable with itself, and it is often denoted by \sigma^2, s^2, \operatorname{Var}(X), V(X), or \mathbb{V}(X).
An advantage of variance as a measure of dispersion is that it is more amenable to algebraic manipulation than other measures of dispersion such as the expected absolute deviation; for example, the variance of a sum of uncorrelated random variables is equal to the sum of their variances. A disadvantage of the variance for practical applications is that, unlike the standard deviation, its units differ from those of the random variable itself.
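For reference, the two facts summarised above can be written out explicitly; this is the standard textbook notation, not anything specific to this page:

\[
\operatorname{Var}(X) = \mathbb{E}\big[(X - \mathbb{E}[X])^2\big] = \mathbb{E}[X^2] - \big(\mathbb{E}[X]\big)^2,
\qquad
\operatorname{Var}\Big(\sum_{i=1}^{n} X_i\Big) = \sum_{i=1}^{n} \operatorname{Var}(X_i)
\quad \text{if the } X_i \text{ are pairwise uncorrelated.}
\]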

Official source

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.


Related people (2)



Related units (2)

Related courses (118)

BIO-322: Introduction to machine learning for bioengineers

Students understand basic concepts and methods of machine learning. They can describe them in mathematical terms and can apply them to data using a high-level programming language (Julia/Python/R).

MICRO-110: Design of experiments

This course provides an introduction to experimental statistics, including the use of population statistics to characterize experimental results, the use of comparison statistics and hypothesis testing to evaluate the validity of experiments, and the design, application, and analysis of multifactorial experiments.

FIN-403: Econometrics

The course covers basic econometric models and methods that are routinely applied to obtain inference results in economic and financial applications.

Related concepts (154)

Statistics

Statistics (from German: Statistik, "description of a state, a country") is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data.

Normal distribution

In statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}, where \mu is the mean and \sigma^2 is the variance.

Probability distribution

In probability theory and statistics, a probability distribution is the mathematical function that gives the probabilities of occurrence of different possible outcomes for an experiment. It is a mathematical description of a random phenomenon in terms of its sample space and the probabilities of events.

Related publications (69)

Optimisation problems in which the objective function is a black box, or in which obtaining the gradient is infeasible, have recently raised interest in zeroth-order optimisation methods. As an example, finding adversarial examples for deep learning models (Chen et al. (2017); Moosavi-Dezfooli et al. (2016)) is one of the most common applications in which zeroth-order methods can be used. These optimisation methods use only function values at certain points to estimate the gradient. Most current approaches iteratively sample a random search direction along which they compute an estimate of the gradient (Nesterov and Spokoiny (2017); Conn et al. (2009); Wibisono et al. (2012)). However, due to the high variance of the search direction, these methods usually need d times more iterations than standard gradient methods, where d is the dimensionality of the problem. It therefore seems that the main effort in improving zeroth-order methods should go into reducing the variance of the gradient estimate. In this work we analyse the gradient-free oracle that uses random directions sampled from a Gaussian distribution. Our analysis shows that, in the smooth and strongly convex setting, we obtain a convergence rate of O(d/T), which clearly shows the dependence on the dimension of the problem. Furthermore, we propose some variance reduction methods to make zeroth-order optimisation faster. We implement the proposed methods in Python to compare their convergence in stochastic and non-stochastic settings. Our empirical results show that, in a setting where the number of allowed function evaluations is fixed, using a variance reduction method (e.g. momentum) makes zeroth-order methods converge faster.
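The abstract describes the gradient-free oracle only in words; the following is a minimal Python sketch of a two-point zeroth-order gradient estimate along Gaussian directions, with a momentum term as a simple variance reduction. The function names (estimate_gradient, zeroth_order_descent) and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_gradient(f, x, mu=1e-4, num_directions=1, rng=None):
    """Two-point zeroth-order gradient estimate along random Gaussian directions.

    For each direction u ~ N(0, I), the directional finite difference
    (f(x + mu*u) - f(x)) / mu scales u to form an estimate of grad f(x).
    Averaging over several directions reduces the variance of the estimate.
    """
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros_like(x)
    for _ in range(num_directions):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - f(x)) / mu * u
    return g / num_directions

def zeroth_order_descent(f, x0, lr=0.1, beta=0.9, steps=100):
    """Gradient-free descent; the momentum term averages out estimator noise."""
    x = np.array(x0, dtype=float)
    m = np.zeros_like(x)
    for _ in range(steps):
        g = estimate_gradient(f, x)
        m = beta * m + (1.0 - beta) * g   # exponential moving average of noisy gradients
        x = x - lr * m
    return x

# Example: minimise a simple strongly convex quadratic in d = 10 dimensions.
f = lambda x: 0.5 * np.sum(x ** 2)
print(zeroth_order_descent(f, np.ones(10)))
```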

2019

In the first chapter, which is a joint work with Mathieu Cambou and Philippe H.A. Charmoy, we study the distribution of the hedging errors of a European call option for the delta-hedging and variance-minimizing strategies. Considering the setting proposed by Heston (1993), we assess the error distribution by computing its moments under the real-world probability measure. It turns out that one is better off implementing either a delta-hedging or a variance-minimizing strategy, depending on the strike and maturity of the option under consideration.

In the second paper, which is a joint work with Damir Filipovic and Loriano Mancini, we develop a practicable continuous-time dynamic arbitrage-free model for the pricing of European contingent claims. Using the framework introduced by Carmona and Nadtochiy (2011, 2012), the stock price is modeled as a semi-martingale process and, at each time t, the marginal distribution of the European option prices is coded by an auxiliary process that starts at t and follows an exponential additive process. The jump intensity that characterizes these auxiliary processes is then set in motion by means of stochastic dynamics of Itô's type. The model is a modification of the one proposed by Carmona and Nadtochiy, as only finitely many jump sizes are assumed. This crucial assumption implies that the jump intensities take values in a finite-dimensional space only. In this setup, explicit necessary and sufficient consistency conditions that guarantee the absence of arbitrage are provided. A practicable dynamic model verifying them is proposed and estimated, using options on the S&P 500. Finally, the hedging of variance swap contracts is considered. It is shown that, under certain conditions, a variance-minimizing hedging portfolio gives lower hedging errors on average compared to a model-free hedging strategy.

In the third and last chapter, which is a joint work with Rémy Praz, we concentrate on the commodity markets and try to understand the impact of financiers on hedging decisions. We look at the changes in the spot price, variance, production, and hedging choices of both producers and financiers when the mass of financiers in the economy increases. We develop an equilibrium model of commodity spot and futures markets in which commodity production, consumption, and speculation are endogenously determined. Financiers facilitate hedging by the commodity suppliers. The entry of new financiers thus increases the supply of the commodity and decreases the expected spot prices, to the benefit of the end-users. However, this entry may be detrimental to the producers, as they do not internalize the price reduction due to greater aggregate supply. In the presence of asymmetric information, speculation on the futures market serves as a learning device. The futures price and open interest reveal different pieces of private information regarding the supply and demand side of the spot market, respectively. When the accuracy of private information is low, the entry of new financiers makes both production and spot prices more volatile. The entry of new financiers typically increases the correlation between financial and commodity markets.

In finite-sample studies, redescending M-estimators outperform bounded M-estimators (see, for example, Andrews et al. [1972. Robust Estimates of Location. Princeton University Press, Princeton]). Even though redescenders arise naturally out of the maximum likelihood approach if one uses very heavy-tailed models, the commonly used redescenders have been derived from purely heuristic considerations. Using a recent approach proposed by Shurygin, we study the optimality of redescending M-estimators. We show that a redescending M-estimator can be designed by applying a global minimax criterion to locally robust estimators, namely by maximizing, over a class of densities, the minimum variance sensitivity over a class of estimators. As a particular result, we prove that Smith's estimator, which is a compromise between Huber's skipped mean and Tukey's biweight, provides a guaranteed level of an estimator's variance sensitivity over the class of densities with a bounded variance. (C) 2007 Elsevier B.V. All rights reserved.
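For readers unfamiliar with the terminology, a redescending M-estimator is one whose influence (psi) function returns to zero for large residuals, so extreme outliers are effectively ignored. The sketch below simply evaluates Tukey's biweight psi function, a standard redescender mentioned in the abstract; the function name and the tuning constant c = 4.685 (the conventional default for 95% efficiency at the normal model) are illustrative choices, not values taken from this paper.

```python
import numpy as np

def tukey_biweight_psi(r, c=4.685):
    """Tukey's biweight psi function: psi(r) = r * (1 - (r/c)^2)^2 for |r| <= c, else 0.

    Because psi vanishes for |r| > c, the estimator is 'redescending':
    sufficiently large residuals contribute nothing to the estimating equation.
    """
    r = np.asarray(r, dtype=float)
    inside = np.abs(r) <= c
    return np.where(inside, r * (1.0 - (r / c) ** 2) ** 2, 0.0)

# Small residuals behave almost linearly; large ones are rejected outright.
print(tukey_biweight_psi([0.5, 2.0, 4.0, 10.0]))
```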

Related lectures (405)