
# Financial mathematics

Summary

Financial mathematics (also known as quantitative finance) is a branch of applied mathematics whose purpose is to model, quantify, and understand the phenomena governing financial operations over time (borrowing and investment) and, in particular, financial markets. It brings the time factor into play and draws mainly on tools from discounting, probability theory, stochastic calculus, statistics, and differential calculus.
History

Discounting and the use of compound interest and probabilities go back several centuries.
That said, Louis Bachelier, through his 1900 thesis Théorie de la spéculation, is considered the founder of financial mathematics applied to markets. The modern theory of financial markets dates back to the CAPM (MEDAF in French) and to the study of the problem of valuing options and other derivative financial contracts in the …
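The compound-interest and discounting tools mentioned above can be sketched in a few lines of Python (all figures are illustrative):

```python
def future_value(principal: float, rate: float, years: int) -> float:
    """Value of `principal` after `years` of annual compounding at `rate`."""
    return principal * (1.0 + rate) ** years

def present_value(cash_flow: float, rate: float, years: int) -> float:
    """Discount a cash flow received in `years` years back to today."""
    return cash_flow / (1.0 + rate) ** years

# 100 invested at 5% for 10 years:
fv = future_value(100.0, 0.05, 10)   # ≈ 162.89
# Discounting that amount back over the same period recovers the principal:
pv = present_value(fv, 0.05, 10)     # ≈ 100.00
```

Discounting is the inverse of compounding: the two functions above are exact inverses of each other for the same rate and horizon.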

Official source

This page is generated automatically and may contain information that is not correct, complete, up to date, or relevant to your search. The same applies to all other pages on this site. Make sure to verify the information against EPFL's official sources.


Related concepts (85)

Finance

Finance refers to a field of activity, now globalized, that consists of providing or finding the money or "financial products" needed to carry out an economic operation. …

Black-Scholes model

The term Black-Scholes model is used to designate two closely related concepts:

- the Black-Scholes model (or Black-Scholes-Merton model), a mathematical model of the market for a stock, in …

Option

In finance, an option is a derivative product that establishes a contract between a buyer and a seller. The buyer of the option obtains the right, but not the obligation, to buy (call) or to sell (put) …
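The two concepts above can be tied together in a minimal sketch: the closed-form Black-Scholes price of a European call option (parameter values are illustrative):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function (no SciPy needed)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """Black-Scholes price of a European call.
    S: spot, K: strike, T: maturity in years, r: risk-free rate, sigma: volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# At-the-money call, 1 year, 5% rate, 20% volatility:
price = bs_call(100.0, 100.0, 1.0, 0.05, 0.2)  # ≈ 10.45
```

The put price follows from put-call parity: put = call − S + K·e^(−rT).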

Related courses (27)

FIN-472: Computational finance

Participants of this course will master computational techniques frequently used in mathematical finance applications. Emphasis will be put on the implementation and practical aspects.

MATH-470: Martingales in financial mathematics

The aim of the course is to apply the theory of martingales in the context of mathematical finance. The course provides a detailed study of the mathematical ideas that are used in modern financial mathematics. Moreover, the concepts of complete and incomplete markets are discussed.

FIN-404: Derivatives

The objective of this course is to provide detailed coverage of the standard models for the valuation and hedging of derivative products such as European options, American options, forward contracts, futures contracts, and exotic options.

Related lectures (51)

Powerful mathematical tools have been developed for trading in stocks and bonds, but other markets that are equally important to the globalized world have to some extent been neglected. We decided to study the shipping market as a new area of development in mathematical finance. The market in shipping derivatives (FFA and FOSVA) has only developed since 2000 and now exhibits impressive growth. Financial actors have entered the field, but it is still largely undiscovered by institutional investors.

The first part of the work was to identify the characteristics of the shipping market, i.e. its segmentation and volatility. Because the shipping business is old-fashioned, even the leading actors on the world stage (ship owners and banks) use macro-economic models to forecast rates. While these models are logical and make sense, they fail to predict. For example, the port-congestion factor has been much cited in recent years, but it is very difficult to control and is simply an indicator of traffic. From our own experience, most ship owners are in fact market driven and rather bad at anticipating trends.

Due to their ability to capture large moves, we chose Lévy processes for the underlying price process. Compared with the macro-economic approach, the main advantage is the uniform and systematic structure this imposes on the models. In each case we obtain a favorable result for our approach and a gain in forecasting accuracy of around 10%, depending on the maturity. The global distribution is modelled more effectively, and the tails of the distribution are particularly well represented. This model can be used to forecast the market but also to evaluate risk, for example by computing the VaR. An important limitation is the non-robustness of the estimation of the Lévy processes. The use of robust estimators reinforces the information obtained from the observed data.
Because maximum likelihood estimation is not easy to compute for complex processes, we consider only very general robust score functions to manage the technical problems. Two new classes of robust estimators are suggested, based on the work of F. Hampel ([29]) and P. Huber ([30]) using influence functions. The main idea is to bound the maximum likelihood score function. Doing so creates a bias in the parameter estimates, which can be corrected by a modification of the type proposed by F. Hampel. The procedure for finding a robust estimating equation thus decomposes into two consecutive steps: subtract the bias correction, then bound the score function.

In the case of complex Lévy processes, the bias correction is difficult to compute and generally unknown. We have developed a pragmatic solution by inverting Hampel's procedure: bound the score function first, then correct for the bias. The price is a loss of the theoretical properties of our estimators; nevertheless, the procedure converges to the maximum likelihood estimate. A second solution for achieving robust estimation is also presented. It considers the limiting case in which the upper and lower bounds tend to zero, and leads to B-robust estimators. Because of the complexity of the Lévy distributions, this leads to identification problems.
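The "bound the score function" idea can be illustrated in its simplest setting: robust estimation of a Gaussian location parameter with a Huber-type clipped score. This is an illustrative sketch, not the thesis's Lévy-process estimators; the tuning constant 1.345 is a conventional choice for the Huber function.

```python
import statistics

def huber_psi(r: float, c: float = 1.345) -> float:
    """Bounded score: identity near zero, clipped beyond +/- c.
    The unbounded ML score for a Gaussian location is psi(r) = r."""
    return max(-c, min(c, r))

def huber_location(data, c=1.345, tol=1e-8, max_iter=200):
    """Solve sum_i psi((x_i - mu)/s) = 0 by fixed-point iteration,
    starting from the median, with a MAD-based scale estimate."""
    mu = statistics.median(data)
    s = statistics.median([abs(x - mu) for x in data]) / 0.6745 or 1.0
    for _ in range(max_iter):
        step = s * sum(huber_psi((x - mu) / s, c) for x in data) / len(data)
        mu += step
        if abs(step) < tol:
            break
    return mu

data = [9.8, 10.1, 10.0, 9.9, 10.2, 30.0]  # one gross outlier
# The bounded score keeps the estimate near 10,
# while the sample mean is dragged above 13 by the outlier.
```

The clipping is exactly what creates the bias that Hampel's correction (or, inverted, a post-hoc correction) must then remove in more complex models.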

Options are some of the most traded financial instruments and computing their price is a central task in financial mathematics and in practice. Consequently, the development of numerical algorithms for pricing options is an active field of research. In general, evaluating the price of a specific option relies on the properties of the stochastic model used for the underlying asset price. In this thesis we develop efficient and accurate numerical methods for option pricing in a specific class of models: polynomial models. They are a versatile tool for financial modeling and have useful properties that can be exploited for option pricing.
Significant challenges arise when developing option pricing techniques. For instance, the underlying model might have a high-dimensional parameter space. Furthermore, treating multi-asset options yields high-dimensional pricing problems. Therefore, the pricing method should be able to handle high dimensionality. Another important aspect is the efficiency of the algorithm: in real-world applications, option prices need to be delivered within short periods of time, making the algorithmic complexity a potential bottleneck. In this thesis, we address these challenges by developing option pricing techniques that are able to handle low and high-dimensional problems, and we propose complexity reduction techniques.
The thesis consists of four parts:
First, we present a methodology for European and American option pricing. The method uses the moments of the underlying price process to produce monotone sequences of lower and upper bounds of the option price. The bounds are obtained by solving a sequence of polynomial optimization problems. As the order of the moments increases, the bounds become sharper and eventually converge to the exact price under appropriate assumptions.
Second, we develop a fast algorithm for the incremental computation of nested block triangular matrix exponentials. This algorithm allows for an efficient incremental computation of the moment sequence of polynomial jump-diffusions. In other words, moments of order 0, 1, 2, 3... are computed sequentially until a dynamically evaluated criterion tells us to stop. The algorithm is based on the scaling and squaring technique and reduces the complexity of the pricing algorithms that require such an incremental moment computation.
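The scaling-and-squaring idea referred to above can be shown in a minimal NumPy sketch: a plain truncated-Taylor variant for a generic matrix, not the thesis's incremental block-triangular algorithm.

```python
import numpy as np

def expm_ss(A, taylor_order=12):
    """Matrix exponential via scaling and squaring:
    scale A down so its norm is small, take a truncated Taylor series,
    then square the result back up. Illustrative, not production code."""
    A = np.asarray(A, dtype=float)
    # Choose s so that ||A|| / 2^s <= 1
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(A, 1), 1e-16)))))
    B = A / (2 ** s)
    # Truncated Taylor series for exp(B)
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, taylor_order + 1):
        term = term @ B / k
        E = E + term
    # exp(A) = (exp(A / 2^s))^(2^s)
    for _ in range(s):
        E = E @ E
    return E
```

Production implementations replace the Taylor series with a Padé approximant; the squaring phase is what an incremental variant can reorganize to reuse work as higher moments are requested.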
Third, we develop a complexity reduction technique for high-dimensional option pricing. To this end, we first consider the option price as a function of model and payoff parameters. Then, the tensorized Chebyshev interpolation is used on the parameter space to increase the efficiency in computing option prices, while maintaining the required accuracy. The high dimensionality of the problem is treated by expressing the tensorized interpolation in the tensor train format and by deriving an efficient way, which is based on tensor completion, to approximate the interpolation coefficients.
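Tensorized interpolation in the tensor train format is beyond a short sketch, but its one-dimensional building block, Chebyshev interpolation of a pricing function over a parameter interval, can be illustrated as follows (the `price` function here is a toy smooth stand-in for an expensive pricer):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

a, b = 0.1, 0.5  # parameter range of interest (e.g. a volatility)

def price(sigma):
    # Toy smooth "pricer"; in practice this call is expensive
    return np.exp(-sigma) + 0.5 * sigma ** 2

# Interpolate over [a, b] by mapping to Chebyshev's native domain [-1, 1]
coeffs = C.chebinterpolate(lambda t: price(0.5 * (b - a) * (t + 1.0) + a), 10)

def fast_price(sigma):
    t = 2.0 * (sigma - a) / (b - a) - 1.0  # map parameter to [-1, 1]
    return C.chebval(t, coeffs)
```

The expensive pricer is called only at the interpolation nodes (offline); afterwards `fast_price` evaluates a cheap polynomial, which is the efficiency gain the chapter exploits, extended to many parameters at once.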
Lastly, we propose a methodology for pricing single and multi-asset European options. The approach is a combination of Monte Carlo simulation and function approximation. We address the memory limitations that arise when treating very high-dimensional applications by combining the method with optimal sampling strategies and using a randomized algorithm to reduce the storage complexity of the approach.
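The Monte Carlo component can be sketched for a single-asset European call under Black-Scholes dynamics (parameter values are illustrative; the function-approximation, optimal-sampling, and storage-reduction layers of the thesis are omitted):

```python
import random, math

def mc_call_price(S0, K, T, r, sigma, n_paths=100_000, seed=42):
    """Monte Carlo price of a European call: simulate the terminal price
    S_T = S0 * exp((r - sigma^2/2) T + sigma sqrt(T) Z), Z ~ N(0, 1),
    average the payoff max(S_T - K, 0), and discount at rate r."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma**2) * T
    vol = sigma * math.sqrt(T)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        st = S0 * math.exp(drift + vol * z)
        total += max(st - K, 0.0)
    return math.exp(-r * T) * total / n_paths
```

With these parameters the estimate converges to the Black-Scholes value (about 10.45 for S0 = K = 100, T = 1, r = 5%, sigma = 20%), with an error of order 1/sqrt(n_paths); the same loop extends to multiple assets by drawing correlated normals.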
The obtained numerical results show the effectiveness of the algorithms developed in this thesis.

From medical support to education and remote work, our everyday lives increasingly depend on Internet performance. When users experience poor performance, however, the decentralization of the Internet allows limited visibility into which network is responsible. As a result, users are promised Service Level Agreements (SLAs) they cannot verify, regulators make rules they cannot enforce, and networks with competitive performance cannot reliably showcase it to attract new customers. To change this, researchers have proposed transparency protocols, which rely on networks reporting on their own performance. However, these proposals would be hard to adopt because i) they require substantial network resources for extracting and publishing the performance information, ii) they require cooperative networks that honestly report their performance against their self-interests, or iii) they threaten the anonymizing capability of Tor-like networks by violating their limited-visibility assumptions and introducing a new attack vector against them.

This dissertation enables network users to estimate the loss and delay of individual networks efficiently and accurately, despite networks generating and controlling the performance data and potentially wanting to exaggerate their performance. It also proposes the first transparency protocol that tries to preserve the capabilities of anonymity networks.

The key to efficient and accurate performance monitoring is i) creating incentives for networks to be honest by causing dishonest networks to come into conflict with their neighbors, and ii) combining these incentives with mathematical tools that "tie together" different aspects of network performance. The key to anonymity-preserving monitoring is the insight that users can benefit from transparency even when networks expose coarser-than-per-packet performance information, which at the same time hides sensitive communication patterns and improves anonymity. Our thesis is that efficient and accurate Internet performance transparency is possible and that we can ease the tussle between transparency and user anonymity.