
# Mathematical finance

Summary

Mathematical finance, also known as quantitative finance and financial mathematics, is a field of applied mathematics concerned with the mathematical modeling of financial markets.
In general, two separate branches of finance require advanced quantitative techniques: derivatives pricing on the one hand, and risk and portfolio management on the other.
Mathematical finance overlaps heavily with the fields of computational finance and financial engineering. The latter focuses on applications and modeling, often with the help of stochastic asset models, while the former focuses, in addition to analysis, on building tools of implementation for the models.
Also related is quantitative investing, which relies on statistical and numerical models (and lately machine learning), as opposed to traditional fundamental analysis, when managing portfolios.
French mathematician Louis Bachelier's doctoral thesis, defended in 1900, is considered the first scholarly work on mathematical finance.



Related courses (27)

FIN-472: Computational finance

Participants of this course will master computational techniques frequently used in mathematical finance applications. Emphasis will be put on the implementation and practical aspects.

MATH-470: Martingales in financial mathematics

The aim of the course is to apply the theory of martingales in the context of mathematical finance. The course provides a detailed study of the mathematical ideas that are used in modern financial mathematics. Moreover, the concepts of complete and incomplete markets are discussed.

FIN-404: Derivatives

The objective of this course is to provide a detailed coverage of the standard models for the valuation and hedging of derivative products such as European options, American options, forward contracts, futures contracts, and exotic options.

Related concepts (85)

Finance

Finance is the study and discipline of money, currency and capital assets. It is related to, but not synonymous with, economics, which is the study of the production, distribution, and consumption of money, assets, goods and services.

Black–Scholes model

The Black–Scholes (/ˌblæk ˈʃoʊlz/) or Black–Scholes–Merton model is a mathematical model for the dynamics of a financial market containing derivative investment instruments, using various underlying assumptions.

Option (finance)

In finance, an option is a contract which conveys to its owner, the holder, the right, but not the obligation, to buy or sell a specific quantity of an underlying asset or instrument at a specified strike price on or before a specified date.
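The closed-form Black–Scholes price for a European call follows directly from the model's assumptions; the following is a minimal self-contained sketch (parameter names are illustrative):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF expressed via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call.
    S: spot, K: strike, T: time to maturity in years,
    r: risk-free rate, sigma: volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

print(bs_call(100, 100, 1.0, 0.05, 0.2))  # at-the-money call, ~10.45
```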

Related publications (24)

Options are some of the most traded financial instruments and computing their price is a central task in financial mathematics and in practice. Consequently, the development of numerical algorithms for pricing options is an active field of research. In general, evaluating the price of a specific option relies on the properties of the stochastic model used for the underlying asset price. In this thesis we develop efficient and accurate numerical methods for option pricing in a specific class of models: polynomial models. They are a versatile tool for financial modeling and have useful properties that can be exploited for option pricing.
Significant challenges arise when developing option pricing techniques. For instance, the underlying model might have a high-dimensional parameter space. Furthermore, treating multi-asset options yields high-dimensional pricing problems. Therefore, the pricing method should be able to handle high dimensionality. Another important aspect is the efficiency of the algorithm: in real-world applications, option prices need to be delivered within short periods of time, making the algorithmic complexity a potential bottleneck. In this thesis, we address these challenges by developing option pricing techniques that are able to handle low and high-dimensional problems, and we propose complexity reduction techniques.
The thesis consists of four parts:
First, we present a methodology for European and American option pricing. The method uses the moments of the underlying price process to produce monotone sequences of lower and upper bounds of the option price. The bounds are obtained by solving a sequence of polynomial optimization problems. As the order of the moments increases, the bounds become sharper and eventually converge to the exact price under appropriate assumptions.
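The simplest instance of this idea uses only the first two moments: Jensen's inequality gives a lower bound on the undiscounted call payoff E[(S−K)+], and Lo's (1987) semiparametric bound gives an upper bound. The thesis's method generalizes this with higher-order moments and polynomial optimization; the sketch below only illustrates the two-moment case:

```python
import numpy as np

def moment_option_bounds(mean, var, strike):
    """Bounds on the undiscounted call payoff E[(S - K)+] from
    the mean and variance of S alone:
      lower: Jensen's inequality, E[(S-K)+] >= max(E[S] - K, 0)
      upper: Lo's (1987) semiparametric bound."""
    m = mean - strike
    lower = max(m, 0.0)
    upper = 0.5 * (m + np.sqrt(var + m * m))
    return lower, upper

# Moments of a lognormal terminal price (S0=100, r=5%, T=1, sigma=20%):
mean = 100 * np.exp(0.05)                      # ~105.13
var = mean**2 * (np.exp(0.2**2) - 1.0)         # ~451
print(moment_option_bounds(mean, var, 100.0))  # brackets the true value
```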
Second, we develop a fast algorithm for the incremental computation of nested block triangular matrix exponentials. This algorithm allows for an efficient incremental computation of the moment sequence of polynomial jump-diffusions. In other words, moments of order 0, 1, 2, 3... are computed sequentially until a dynamically evaluated criterion tells us to stop. The algorithm is based on the scaling and squaring technique and reduces the complexity of the pricing algorithms that require such an incremental moment computation.
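The scaling-and-squaring idea can be illustrated in a few lines: scale the matrix until a series approximation is accurate, then square the result back up. This is a simplified sketch using a truncated Taylor series; production implementations, and the thesis's incremental block-triangular variant, use Padé approximants and more careful scaling (see e.g. scipy.linalg.expm):

```python
import numpy as np

def expm_scaling_squaring(A, degree=12):
    """Matrix exponential via scaling and squaring, with a
    truncated Taylor series as the core approximation."""
    A = np.asarray(A, dtype=float)
    # Choose s so that ||A / 2^s|| is small enough for the series.
    norm = np.linalg.norm(A, 1)
    s = max(0, int(np.ceil(np.log2(norm))) + 1) if norm > 0 else 0
    B = A / (2 ** s)
    # Truncated Taylor series of exp(B).
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, degree + 1):
        term = term @ B / k
        E = E + term
    # Undo the scaling: exp(A) = exp(B)^(2^s).
    for _ in range(s):
        E = E @ E
    return E
```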
Third, we develop a complexity reduction technique for high-dimensional option pricing. To this end, we first consider the option price as a function of model and payoff parameters. Then, the tensorized Chebyshev interpolation is used on the parameter space to increase the efficiency in computing option prices, while maintaining the required accuracy. The high dimensionality of the problem is treated by expressing the tensorized interpolation in the tensor train format and by deriving an efficient way, which is based on tensor completion, to approximate the interpolation coefficients.
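In one dimension, the building block of this approach, polynomial interpolation at Chebyshev nodes, looks as follows (a NumPy sketch; the thesis works with the tensorized, tensor-train version in many dimensions):

```python
import numpy as np

def chebyshev_interpolate(f, a, b, n):
    """Interpolate f on [a, b] at n+1 Chebyshev points;
    returns a callable evaluating the degree-n interpolant."""
    # Chebyshev points of the first kind on [-1, 1], mapped to [a, b].
    k = np.arange(n + 1)
    x = np.cos((2 * k + 1) * np.pi / (2 * (n + 1)))
    nodes = 0.5 * (a + b) + 0.5 * (b - a) * x
    coeffs = np.polynomial.chebyshev.chebfit(x, [f(t) for t in nodes], n)
    def interpolant(t):
        u = (2 * t - a - b) / (b - a)  # map back to [-1, 1]
        return np.polynomial.chebyshev.chebval(u, coeffs)
    return interpolant
```

In practice `f` would be an expensive pricing routine evaluated offline at the nodes, after which the cheap interpolant replaces it online.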
Lastly, we propose a methodology for pricing single and multi-asset European options. The approach is a combination of Monte Carlo simulation and function approximation. We address the memory limitations that arise when treating very high-dimensional applications by combining the method with optimal sampling strategies and using a randomized algorithm to reduce the storage complexity of the approach.
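For a single asset under geometric Brownian motion, the plain Monte Carlo component of such an approach reduces to a few lines (illustrative parameters; the thesis's combination with function approximation, optimal sampling, and randomized storage reduction is not shown here):

```python
import numpy as np

def mc_call_price(S0, K, T, r, sigma, n_paths=200_000, seed=0):
    """Monte Carlo price of a European call under geometric
    Brownian motion, simulated under the risk-neutral measure."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n_paths)
    # Exact simulation of the terminal price.
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    payoff = np.maximum(ST - K, 0.0)
    return np.exp(-r * T) * payoff.mean()
```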
The obtained numerical results show the effectiveness of the algorithms developed in this thesis.

Powerful mathematical tools have been developed for trading in stocks and bonds, but other markets that are equally important for the globalized world have to some extent been neglected. We decided to study the shipping market as a new area of development in mathematical finance. The market in shipping derivatives (FFA and FOSVA) has only been developed since 2000 and now exhibits impressive growth. Financial actors have entered the field, but it is still largely undiscovered by institutional investors. The first part of the work was to identify the characteristics of the shipping market, i.e. its segmentation and volatility. Because the shipping business is old-fashioned, even the leading actors on the world stage (ship owners and banks) use macro-economic models to forecast the rates. While the macro-economic models are logical and make sense, they fail to predict. For example, port congestion has been much cited as a factor during the last few years, but it is clearly very difficult to control and is simply an indicator of traffic. From our own experience it appears that most ship owners are in fact market driven and rather bad at anticipating trends. Due to their ability to capture large moves, we chose Lévy processes for the underlying price process. Compared with the macro-economic approach, the main advantage is the uniform and systematic structure this imposes on the models. In each case we obtain a favorable result for our approach, with a gain in forecasting accuracy of around 10% depending on the maturity. The global distribution is more effectively modelled, and the tails of the distribution are particularly well represented. This model can be used to forecast the market but also to evaluate risk, for example by computing the VaR. An important limitation is the non-robustness of the estimation of the Lévy processes. The use of robust estimators reinforces the information obtained from the observed data.
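A Merton-style jump-diffusion is one of the simplest Lévy-type models able to capture such large moves; the following simulation sketch illustrates the class (parameters are illustrative, not calibrated to freight rates):

```python
import numpy as np

def simulate_merton_jump_diffusion(S0, mu, sigma, lam, jump_mu,
                                   jump_sigma, T, n_steps, seed=0):
    """Simulate one path of a Merton jump-diffusion: Brownian
    motion with drift plus compound-Poisson Gaussian log-jumps.
    lam is the jump intensity (expected jumps per unit time)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    log_S = np.empty(n_steps + 1)
    log_S[0] = np.log(S0)
    for i in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        n_jumps = rng.poisson(lam * dt)
        jumps = rng.normal(jump_mu, jump_sigma, n_jumps).sum()
        log_S[i + 1] = (log_S[i] + (mu - 0.5 * sigma**2) * dt
                        + sigma * dW + jumps)
    return np.exp(log_S)
```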
Because maximum likelihood estimation is not easy to compute with complex processes, we only consider some very general robust score functions to manage the technical problems. Two new classes of robust estimators are suggested. These are based on the work of F. Hampel ([29]) and P. Huber ([30]) using influence functions. The main idea is to bound the maximum likelihood score function. Doing so introduces a bias in the parameter estimation, which can be corrected by a modification of the type proposed by F. Hampel. The procedure for finding a robust estimating equation is thus decomposed into two consecutive steps: subtract the bias correction, then bound the score function. In the case of complex Lévy processes, the bias correction is difficult to compute and generally unknown. We have developed a pragmatic solution by inverting Hampel's procedure: bound the score function first, then correct for the bias. The price is a loss of the theoretical properties of our estimators, although the procedure still converges to the maximum likelihood estimate. A second solution for achieving robust estimation is presented. It considers the limiting case where the upper and lower bounds tend to zero, and leads to B-robust estimators. Because of the complexity of the Lévy distributions, this leads to identification problems.
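The core idea, bounding the maximum-likelihood score function, can be shown for the simplest case: estimating a location parameter with Huber's ψ(u) = clip(u, −c, c). This is a sketch of that textbook case, not the thesis's Lévy estimators:

```python
import numpy as np

def huber_location(x, c=1.345, tol=1e-8, max_iter=100):
    """Robust location estimate with Huber's bounded score
    function, solved by iteratively reweighted averaging."""
    x = np.asarray(x, dtype=float)
    mu = np.median(x)                            # robust start
    scale = np.median(np.abs(x - mu)) / 0.6745   # MAD scale estimate
    if scale == 0:
        return mu
    for _ in range(max_iter):
        u = (x - mu) / scale
        # Weight 1 inside [-c, c]; observations outside are downweighted,
        # which is equivalent to clipping the score function at +/- c.
        w = np.where(np.abs(u) <= c, 1.0, c / np.abs(u))
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu
```

Unlike the sample mean, the estimate barely moves when a single gross outlier is added, which is exactly the robustness the bounded score buys.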

Related lectures (51)

From medical support to education and remote work, our everyday lives increasingly depend on Internet performance. When users experience poor performance, however, the decentralization of the Internet allows limited visibility into which network is responsible. As a result, users are promised Service Level Agreements (SLAs) they cannot verify, regulators make rules they cannot enforce, and networks with competitive performance cannot reliably showcase it to attract new customers. To change this, researchers have proposed transparency protocols, which rely on networks reporting on their own performance. However, these proposals would be hard to adopt because i) they require substantial network resources for extracting and publishing the performance information, or ii) they require cooperative networks that honestly report their performance against their self-interests, or iii) they threaten the anonymizing capability of Tor-like networks by violating their limited-visibility assumptions and introducing a new attack vector against them.

This dissertation enables network users to estimate the loss and delay of individual networks in an efficient and accurate manner, despite networks generating and controlling the performance data and potentially wanting to exaggerate their performance. It also proposes the first transparency protocol that tries to preserve the capabilities of anonymity networks. The key to efficient and accurate performance monitoring is i) creating incentives for networks to be honest by causing dishonest networks to get into conflict with their neighbors, and ii) combining these incentives with mathematical tools that "tie together" different aspects of network performance. The key to anonymity-preserving monitoring is the insight that users can benefit from transparency even when networks expose coarser-than-per-packet performance information, which at the same time hides sensitive communication patterns and improves anonymity. Our thesis is that efficient and accurate Internet performance transparency is possible and that we can ease the tussle between transparency and user anonymity.