Concept

Maximum a posteriori

Summary
The maximum a posteriori (MAP) estimator, just like the maximum likelihood method, is a method that can be used to estimate a number of unknown parameters, such as the parameters of a probability density, associated with a given sample. It is closely related to maximum likelihood but differs from it in that it can take into account a non-uniform prior on the parameters to be estimated.

Definition
The maximum a posteriori method consists in finding the value \hat{\theta}_{\mathrm{MAP}} that maximizes the quantity L(\theta)\,p(\theta), where L(\theta) is the likelihood and p(\theta) is the prior distribution of the parameters \theta; that is, \hat{\theta}_{\mathrm{MAP}} = \arg\max_{\theta} L(\theta)\,p(\theta). Thus, the maximum likelihood estimator is the MAP estimator for a uniform prior distribution.

Applications
Astronomy
The maximum a posteriori is used in astronomy as a complement to tele
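As an illustration of the definition above, here is a minimal Python sketch (not part of the original article) comparing the maximum likelihood and MAP estimates of a Bernoulli parameter under a Beta prior; the conjugate closed forms make the role of the non-uniform prior, and its disappearance under a uniform prior, explicit. The data and the prior hyperparameters are made up for the example.

import numpy as np

# Hypothetical data: 10 Bernoulli trials with 7 successes.
data = np.array([1, 1, 1, 1, 1, 1, 1, 0, 0, 0])
n, k = len(data), int(data.sum())

# Maximum likelihood: argmax_theta L(theta) = k / n.
theta_mle = k / n

# MAP with a Beta(a, b) prior: argmax_theta L(theta) * p(theta)
# has the closed form (k + a - 1) / (n + a + b - 1).
a, b = 2.0, 2.0                      # prior pulling the estimate towards 0.5
theta_map = (k + a - 1) / (n + a + b - 1)

# With the uniform prior Beta(1, 1), the MAP estimate coincides with the MLE.
theta_map_uniform = (k + 1 - 1) / (n + 1 + 1 - 1)

print(theta_mle, theta_map, theta_map_uniform)   # 0.7, 0.666..., 0.7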
Related publications (100)

A posteriori error estimation for partial differential equations with random input data

Diane Sylvie Guignard

This thesis is devoted to the derivation of error estimates for partial differential equations with random input data, with a focus on a posteriori error estimates, which are the basis for adaptive strategies. Such procedures aim at obtaining an approximation of the solution with a given precision while minimizing the computational costs. If several sources of error come into play, it is then necessary to balance them to avoid unnecessary work. We are first interested in problems that contain small uncertainties approximated by finite elements. The use of perturbation techniques is appropriate in this setting since only a few terms in the power series expansion of the exact random solution with respect to a parameter characterizing the amount of randomness in the problem are required to obtain an accurate approximation. The goal is then to perform an error analysis for the finite element approximation of the expansion up to a certain order. First, an elliptic model problem with a random diffusion coefficient with affine dependence on a vector of independent random variables is studied. We give both a priori and a posteriori error estimates for the first term in the expansion for various norms of the error. The results are then extended to higher order approximations and to other sources of uncertainty, such as boundary conditions or the forcing term. Next, the analysis of nonlinear problems in random domains is proposed, considering the one-dimensional viscous Burgers' equation and the more involved incompressible steady-state Navier-Stokes equations. The domain mapping method is used to transform the equations in random domains into equations in a fixed reference domain with random coefficients. We give conditions on the mapping and the input data under which we can prove the well-posedness of the problems, and give a posteriori error estimates for the finite element approximation of the first term in the expansion. Finally, we consider the heat equation with random Robin boundary conditions. For this parabolic problem, the time discretization brings an additional source of error that is accounted for in the error analysis. The second part of this work consists in the analysis of a random elliptic diffusion problem that is approximated in the physical space by the finite element method and in the stochastic space by the stochastic collocation method on a sparse grid. Considering a random diffusion coefficient with affine dependence on a vector of independent random variables, we derive a residual-based a posteriori error estimate that controls the two sources of error. The stochastic error estimator is then used to drive an adaptive sparse grid algorithm which aims at alleviating the so-called curse of dimensionality inherent to tensor grids. Several numerical examples are given to illustrate the performance of the adaptive procedure.
EPFL, 2016
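As a toy companion to the abstract above, the following Python sketch (my own construction, unrelated to the thesis code) runs the solve-estimate-mark-refine loop that residual-based a posteriori indicators drive, here for the deterministic 1D Poisson problem -u'' = f on (0, 1) with homogeneous Dirichlet conditions and piecewise-linear finite elements. The right-hand side, the Dörfler marking fraction of 0.5 and the bisection refinement are assumptions made for the example.

import numpy as np

# Toy problem: -u'' = f on (0, 1), u(0) = u(1) = 0, with an assumed right-hand side.
f = lambda x: np.pi**2 * np.sin(np.pi * x)

def solve_p1(nodes):
    """Piecewise-linear (P1) finite element solution on the given mesh."""
    h = np.diff(nodes)
    n = len(nodes)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i, hi in enumerate(h):
        A[i:i+2, i:i+2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / hi
        mid = 0.5 * (nodes[i] + nodes[i+1])
        b[i:i+2] += 0.5 * hi * f(mid)        # midpoint quadrature for the load
    A[0, :] = 0.0
    A[0, 0] = 1.0
    A[-1, :] = 0.0
    A[-1, -1] = 1.0
    b[0] = b[-1] = 0.0                       # homogeneous Dirichlet conditions
    return np.linalg.solve(A, b)

def estimate(nodes):
    """Residual indicators eta_K = h_K * ||f + u_h''||_L2(K); for P1 elements
    u_h'' vanishes inside elements, so in 1D the indicator only involves f."""
    h = np.diff(nodes)
    mid = 0.5 * (nodes[:-1] + nodes[1:])
    return h * np.sqrt(h) * np.abs(f(mid))   # midpoint rule for the L2 norm

nodes = np.linspace(0.0, 1.0, 5)
for _ in range(10):
    u_h = solve_p1(nodes)                    # SOLVE (kept to show the full loop)
    eta = estimate(nodes)                    # ESTIMATE
    idx = np.argsort(eta)[::-1]              # MARK (Doerfler marking, theta = 0.5)
    cum = np.cumsum(eta[idx]**2)
    marked = idx[:np.searchsorted(cum, 0.5 * cum[-1]) + 1]
    mids = 0.5 * (nodes[marked] + nodes[marked + 1])
    nodes = np.sort(np.concatenate([nodes, mids]))   # REFINE by bisection

print(len(nodes), np.sqrt(np.sum(estimate(nodes)**2)))  # mesh size, total estimated error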

Automatic L2 Regularization for Multiple Generalized Additive Models

Yousra El Bachir

Multiple generalized additive models are a class of statistical regression models wherein parameters of probability distributions incorporate information through additive smooth functions of predictors. The functions are represented by basis function expansions, whose coefficients are the regression parameters. The smoothness is induced by a quadratic roughness penalty on the functions' curvature, which is equivalent to a weighted L_2 regularization controlled by smoothing parameters. Regression fitting relies on maximum penalized likelihood estimation for the regression coefficients, and smoothness selection relies on maximum marginal likelihood estimation for the smoothing parameters. Owing to their nonlinearity, flexibility and interpretability, generalized additive models are widely used in statistical modeling, but despite recent advances, reliable and fast methods for automatic smoothing in massive datasets are unavailable. Existing approaches are either reliable, complex and slow, or unreliable, simpler and fast, so a compromise must be made. A bridge between these categories is needed to extend the use of multiple generalized additive models to settings beyond those possible in existing software. This thesis is one step in this direction. We adopt the marginal likelihood approach to develop approximate expectation-maximization methods for automatic smoothing, which avoid evaluation of expensive and unstable terms. This results in simpler algorithms that do not sacrifice reliability and achieve state-of-the-art accuracy and computational efficiency. We extend the proposed approach to big-data settings and produce the first reliable, high-performance and distributed-memory algorithm for fitting massive multiple generalized additive models. Furthermore, we develop the underlying generic software libraries and make them accessible to the open-source community.
EPFL, 2019
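To make the quadratic roughness penalty described in the abstract above concrete, here is a small Python sketch (not the thesis software): a basis function expansion fitted by penalized least squares with a second-difference L_2 penalty, with the smoothing parameter chosen here by generalized cross-validation rather than the marginal likelihood used in the thesis. The Gaussian bump basis, the knot count and the candidate grid for lambda are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)   # made-up data

# Basis function expansion: Gaussian bumps standing in for spline bases.
knots = np.linspace(0.0, 1.0, 25)
B = np.exp(-0.5 * ((x[:, None] - knots[None, :]) / 0.05) ** 2)

# Quadratic (weighted L_2) roughness penalty lambda * ||D beta||^2,
# with D the second-difference operator on the coefficients.
D = np.diff(np.eye(len(knots)), n=2, axis=0)
P = D.T @ D

def fit(lam):
    """Penalized least squares: solve (B'B + lam * P) beta = B'y."""
    return np.linalg.solve(B.T @ B + lam * P, B.T @ y)

def gcv(lam):
    """Generalized cross-validation score used here to pick the smoothing parameter."""
    H = B @ np.linalg.solve(B.T @ B + lam * P, B.T)    # hat (influence) matrix
    resid = y - H @ y
    return x.size * (resid @ resid) / (x.size - np.trace(H)) ** 2

lam_best = min(10.0 ** np.arange(-6, 3), key=gcv)
beta = fit(lam_best)
fitted = B @ beta
print(lam_best, float(np.mean((fitted - np.sin(2 * np.pi * x)) ** 2)))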

Robust financial mathematics, the shipping market and FFAs

Jérôme Reboulleau

Powerful mathematical tools have been developed for trading in stocks and bonds, but other markets that are equally important for the globalized world have to some extent been neglected. We decided to study the shipping market as a new area of development in mathematical finance. The market in shipping derivatives (FFA and FOSVA) was only developed after 2000 and now exhibits impressive growth. Financial actors have entered the field, but it is still largely undiscovered by institutional investors. The first part of the work was to identify the characteristics of the shipping market, i.e. its segmentation and volatility. Because the shipping business is old-fashioned, even the leading actors on the world stage (ship owners and banks) are using macro-economic models to forecast the rates. While the macro-economic models are logical and make sense, they fail to predict. For example, port congestion has been much cited as a factor during the last few years, but it is clearly very difficult to control and is simply an indicator of traffic. From our own experience it appears that most ship owners are in fact market driven and rather bad at anticipating trends. Because of their ability to capture large moves, we chose to consider Lévy processes for the underlying price process. Compared with the macro-economic approach, the main advantage is the uniform and systematic structure this imposes on the models. We get in each case a favorable result for our technology and a gain in forecasting accuracy of around 10%, depending on the maturity. The global distribution is more effectively modelled and the tails of the distribution are particularly well represented. This model can be used to forecast the market but also to evaluate the risk, for example by computing the VaR. An important limitation is the non-robustness of the estimation of the Lévy processes. The use of robust estimators reinforces the information obtained from the observed data. Because maximum likelihood estimation is not easy to compute with complex processes, we only consider some very general robust score functions to manage the technical problems. Two new classes of robust estimators are suggested. These are based on the work of F. Hampel ([29]) and P. Huber ([30]) using influence functions. The main idea is to bound the maximum likelihood score function. Doing so creates a bias in the parameter estimation, which can be corrected using a modification of the type proposed by F. Hampel. The procedure for finding a robust estimating equation is thus decomposed into two consecutive steps: subtract the bias correction, then bound the score function. In the case of complex Lévy processes, the bias correction is difficult to compute and generally unknown. We have developed a pragmatic solution by inverting Hampel's procedure: bound the score function, then correct for the bias. The price is a loss of the theoretical properties of our estimators; moreover, the procedure converges to the maximum likelihood estimate. A second solution for achieving robust estimation is presented. It considers the limiting case when the upper and lower bounds tend to zero and leads to B-robust estimators. Because of the complexity of the Lévy distributions, this leads to identification problems.
EPFL, 2008
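The "bound the maximum likelihood score" idea mentioned in the abstract above can be illustrated on a much simpler model than a Lévy process: for a Gaussian location parameter the ML score is psi(r) = r, and clipping it at +/- c gives Huber's M-estimator. The Python sketch below is my own illustration, with made-up contaminated data and the conventional tuning constant c = 1.345; it solves the bounded-score equation by a simple fixed-point iteration.

import numpy as np

rng = np.random.default_rng(1)
# Made-up contaminated sample: 95 points from N(0, 1) plus 5 gross outliers.
data = np.concatenate([rng.normal(0.0, 1.0, 95), rng.normal(15.0, 1.0, 5)])

def huber_psi(r, c=1.345):
    """Gaussian ML score psi(r) = r, bounded (clipped) at +/- c."""
    return np.clip(r, -c, c)

def bounded_score_location(x, c=1.345, tol=1e-8, max_iter=200):
    """Solve sum_i psi(x_i - mu) = 0 by a fixed-point iteration started at the median."""
    mu = np.median(x)
    for _ in range(max_iter):
        step = huber_psi(x - mu, c).mean()
        if abs(step) < tol:
            break
        mu += step
    return mu

# The unbounded score (sample mean) is dragged towards the outliers;
# the bounded score stays close to the bulk of the data.
print(float(np.mean(data)), bounded_score_location(data))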
Related concepts (16)
Bayesian inference
Illustration comparing the frequentist and Bayesian approaches (Christophe Michel, 2018). Bayesian inference is a method of statistical inference by which one computes the probabilities
Likelihood function
Example of a likelihood function for the parameter of a Poisson distribution. In probability theory and statistics, the likelihood function (or more simply the likelihood) is
Maximum likelihood
In statistics, the maximum likelihood estimator is a statistical estimator used to infer the parameters of the probability distribution of a given sample by searching for the values
Related courses (18)
FIN-403: Econometrics
The course covers basic econometric models and methods that are routinely applied to obtain inference results in economic and financial applications.
PHYS-467: Machine learning for physicists
Machine learning and data analysis are becoming increasingly central in sciences including physics. In this course, fundamental principles and methods of machine learning will be introduced and practised.
COM-500: Statistical signal and data processing through applications
Building up on the basic concepts of sampling, filtering and Fourier transforms, we address stochastic modeling, spectral analysis, estimation and prediction, classification, and adaptive filtering, with an application-oriented approach and hands-on numerical exercises.