
# Robust Portfolio Optimization

Abstract

Since the 2008 Global Financial Crisis, the financial market has become more unpredictable than ever before, and it seems set to remain so for the foreseeable future. Investors therefore face unprecedented risks, hence the growing need for robust portfolio optimization to protect against uncertainty, which can be devastating if left unattended yet is ignored in the classical Markowitz model. A further deficiency of that model is the absence of higher moments in its assumed distribution of asset returns. We establish an equivalence between the Markowitz model and the portfolio return value-at-risk optimization problem under multivariate normality of asset returns, so that these excluded features can be added to the former implicitly by incorporating them into the latter. We also provide a probabilistic smoothing spline approximation method and a deterministic model within the location-scale framework, under an elliptical distribution of asset returns, to solve the robust portfolio return value-at-risk optimization problem. In particular, for the deterministic model we introduce a novel eigendecomposition uncertainty set for the scale matrix that lives in the positive definite space without compromising the computational complexity or conservativeness of the optimization problem, devise a method to determine the size of the uncertainty sets involved, test it on real data, and explore its diversification properties. Although value-at-risk has been the standard risk measure adopted by the banking and insurance industries since the early nineties, it has attracted many criticisms, notably from McNeil et al. (2005) and the Basel Committee on Banking Supervision in 2012 (the so-called Basel 3.5). Basel 4 even suggests a move away from the "what" value-at-risk measure towards the "what-if" conditional value-at-risk measure. We shall see that the former can easily be replaced with the latter, or with other risk measures, in our formulations.
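As an illustration of the equivalence claimed above (standard symbols μ, Σ, w, α; not necessarily the thesis's notation): if the asset returns are multivariate normal, the portfolio return is univariate normal, the value-at-risk has a closed form, and minimizing it at a fixed target mean reduces to a mean-variance problem.

```latex
% Standard result, sketched for illustration; mu, Sigma, w, alpha are
% generic symbols rather than the thesis's own notation.
\[
  w^\top R \sim \mathcal{N}\!\left(w^\top \mu,\; w^\top \Sigma\, w\right)
  \quad\Longrightarrow\quad
  \mathrm{VaR}_\alpha(w) = -\,w^\top \mu + z_{1-\alpha}\,\sqrt{w^\top \Sigma\, w},
\]
% where z_{1-alpha} is the (1-alpha)-quantile of the standard normal law.
% Minimizing VaR_alpha(w) subject to a fixed target mean w'mu therefore
% reduces to minimizing the variance w' Sigma w, i.e. the Markowitz
% mean-variance problem.
```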


This page is automatically generated and may contain information that is not correct, complete, up to date, or relevant to your search. The same applies to all other pages on this site. Please verify the information against EPFL's official sources.


Related concepts (28)

Uncertainty

Uncertainty refers to epistemic situations involving imperfect or unknown information. It applies to predictions of future events, to physical measurements that are already made, or to the unknown.

Multivariate normal distribution

In probability theory, the multivariate normal distribution (also called the multivariate Gaussian distribution or multinormal law) is the probability distribution that generalizes the one-dimensional normal distribution to several variables.
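For reference, in the non-degenerate case the density is (a standard fact, with μ the mean vector and Σ the covariance matrix in dimension k):

```latex
\[
  f(x) = \frac{1}{(2\pi)^{k/2}\,\lvert\Sigma\rvert^{1/2}}
         \exp\!\left(-\tfrac{1}{2}(x-\mu)^\top \Sigma^{-1} (x-\mu)\right),
  \qquad x \in \mathbb{R}^k .
\]
```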

Financial market

A financial market is a market in which individuals, private companies, and public institutions can trade financial securities, commodities, and other assets.

Related publications (29)


The increasing interest in using statistical extreme value theory to analyse environmental data is mainly driven by the large impact extreme events can have. A difficulty with spatial data is that most existing inference methods for asymptotically justified models for extremes are computationally intractable for data at several hundreds of sites, a number easily attained or surpassed by the output of physical climate models or satellite-based data sets. This thesis does not directly tackle this problem, but it provides some elements that might be useful in doing so.

The first part of the thesis contains a pointwise marginal analysis of satellite-based measurements of total column ozone in the northern and southern mid-latitudes. At each grid cell, the r-largest order statistics method is used to analyse extremely low and high values of total ozone, and an autoregressive moving average time series model is used for an analogous analysis of mean values. Both models include the same set of global covariates describing the dynamical and chemical state of the atmosphere. The results show that the influence of the covariates is captured in both the "bulk" and the tails of the statistical distribution of ozone. For some covariates, our results are in good agreement with findings of earlier studies, whereas previously unreported influences are identified for two dynamical covariates.

The second part concerns the frameworks of multivariate and spatial modelling of extremes. We review one class of multivariate extreme value distributions, the so-called Hüsler–Reiss model, as well as its spatial extension, the Brown–Resnick process. For the former, we provide a detailed discussion of its parameter matrix, including the case of degeneracy, which arises when the correlation matrices of the underlying multivariate Gaussian distributions are singular. We establish a simplification for computing the partial derivatives of the exponent function of these two models. As a consequence of the considerably reduced number of terms in each partial derivative, the computation time for the multivariate joint density of these models can be reduced, which could be helpful for (composite) likelihood inference. Finally, we propose a new variant of the Brown–Resnick process based on the Karhunen–Loève expansion of its underlying Gaussian process. As an illustration, we use composite likelihood to fit a simplified version of our model to a hindcast data set of wave heights that exhibits highly dependent extremes.
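The r-largest order statistics method used in the first part generalizes the classical block-maxima approach (its r = 1 special case). As a minimal illustrative sketch, assuming synthetic data in place of the actual ozone record and omitting covariates, one can fit a generalized extreme value (GEV) distribution to block maxima with scipy:

```python
import numpy as np
from scipy.stats import genextreme

# Hypothetical stand-in for a gridded ozone series: 40 "years" of daily values.
rng = np.random.default_rng(0)
daily = rng.gumbel(loc=300.0, scale=15.0, size=(40, 365))

# Block maxima: the r = 1 special case of the r-largest order statistics method.
annual_max = daily.max(axis=1)

# Fit the GEV distribution (scipy's shape `c` is the negated usual shape xi).
c, loc, scale = genextreme.fit(annual_max)

# 50-year return level: the level exceeded once every 50 blocks on average.
return_level_50 = genextreme.ppf(1 - 1 / 50, c, loc=loc, scale=scale)
print(f"shape={-c:.3f}, loc={loc:.1f}, scale={scale:.1f}, "
      f"50-year return level={return_level_50:.1f}")
```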

Extreme value analysis is concerned with the modelling of extreme events such as floods and heatwaves, which can have large impacts. Statistical modelling can be useful to better assess risks even if, due to scarcity of measurements, there is inherently very large residual uncertainty in any analysis. Driven by the increase in environmental databases, spatial modelling of extremes has expanded rapidly in the last decade. This thesis presents contributions to such analysis.
The first chapter is about likelihood-based inference in the univariate setting and investigates the use of bias-correction and higher-order asymptotic methods for extremes, highlighting through examples and illustrations the unique challenge posed by data scarcity. We focus on parametric modelling of extreme values, which relies on limiting distributional results and for which, as a result, uncertainty quantification is complicated. We find that, in certain cases, small-sample asymptotic methods can give improved inference by reducing the error rate of confidence intervals. Two data illustrations, linked to assessment of the frequency of extreme rainfall episodes in Venezuela and the analysis of survival of supercentenarians, illustrate the methods developed.
In the second chapter, we review the major methods for the analysis of spatial extremes models. We highlight the similarities and provide a thorough literature review along with novel simulation algorithms. The methods described therein are made available through a statistical software package.
The last chapter focuses on estimation for a Bayesian hierarchical model derived from a multivariate generalized Pareto process. We review approaches for the estimation of censored components in models derived from (log)-elliptical distributions, paying particular attention to the estimation of a high-dimensional Gaussian distribution function via Monte Carlo methods. The impacts of model misspecification and of censoring are explored through extensive simulations and we conclude with a case study of rainfall extremes in Eastern Switzerland.
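The high-dimensional Gaussian distribution function mentioned in the last chapter has no closed form; a crude sketch of its Monte Carlo estimation follows (generic names, plain Monte Carlo rather than the specialized estimators the thesis reviews):

```python
import numpy as np

def mc_gaussian_cdf(mean, cov, upper, n_samples=200_000, seed=0):
    """Crude Monte Carlo estimate of P(X <= upper) for X ~ N(mean, cov)."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    hits = np.all(samples <= upper, axis=1)
    p_hat = hits.mean()
    # Binomial standard error of the estimate.
    se = np.sqrt(p_hat * (1 - p_hat) / n_samples)
    return p_hat, se

# Toy example in 10 dimensions with an equicorrelated covariance matrix.
d = 10
cov = 0.5 * np.ones((d, d)) + 0.5 * np.eye(d)
p, se = mc_gaussian_cdf(np.zeros(d), cov, upper=np.ones(d))
print(f"P(X <= 1) ~ {p:.4f} +/- {2 * se:.4f}")
```

Plain Monte Carlo degrades once the probability is small or the dimension large, which is why more refined (quasi-)Monte Carlo estimators of the kind reviewed in the thesis are needed in practice.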

The advent of wireless communication technologies has created a paradigm shift in the accessibility of communication. With it has come an increased demand for throughput, a demand that is likely to grow further in the future. A key aspect of meeting these challenges is to develop low complexity algorithms and architectures that exploit features of the wireless medium such as broadcasting and physical layer cooperation. In this thesis, we consider several problems in the domain of low complexity coding, relaying, and scheduling for wireless networks.

We formulate the Pliable Index Coding problem, which models a server trying to send one or more new messages over a noiseless broadcast channel to a set of clients that already have a subset of the messages as side information. We show, through theoretical bounds and algorithms, that it is possible to design short codes, poly-logarithmic in the number of clients, to solve this problem. These code lengths are exponentially shorter than those possible in a traditional index coding setup.

Next, we consider several aspects of low complexity relaying in half-duplex diamond networks. In such networks, the source transmits information to the destination through n half-duplex intermediate relays arranged in a single layer. The half-duplex nature of the relays implies that each can be in either a listening or a transmitting state at any point in time. To achieve high rates, there is the additional complexity of optimizing the schedule (i.e. the relative time fractions) of the relaying states, which can be 2^n in number. Using approximate capacity expressions derived from the quantize-map-forward scheme for physical layer cooperation, we show that for networks with n ≤ 6 relays, the optimal schedule has at most n + 1 active states. This is an exponential improvement over the possible 2^n active states in a schedule. We also show that it is possible to achieve at least half the (approximate) capacity of such networks by employing simple routing strategies that use only two relays and two scheduling states. These results imply that the complexity of relaying in half-duplex diamond networks can be significantly reduced, by using fewer scheduling states or fewer relays, without adversely affecting throughput. Both results assume centralized processing of the channel state information of all the relays. We take the first steps in analyzing the performance of relaying schemes in which each relay switches between the listening and transmitting states randomly and optimizes the relative time fractions using only local channel state information. We show that even with such simple scheduling, a significant fraction of the network capacity can be achieved.

Next, we look at the dual problem of selecting the subset of relays of a given size that has the highest capacity in a general layered full-duplex relay network. We formulate this as an optimization problem and derive efficient approximation algorithms to solve it.

We end the thesis with the design and implementation of a practical relaying scheme called QUILT, in which the relay opportunistically decodes or quantizes its received signal and transmits the resulting sequence in cooperation with the source. To keep the complexity of the system low, we use LDPC codes at the source, interleaving at the relays, and belief propagation decoding at the destination. We evaluate our system through testbed experiments over WiFi.
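Once approximate cut capacities are fixed, optimizing the schedule over the 2^n listen/transmit states is a linear program in the state time fractions. The sketch below is a toy illustration with assumed random cut values (generic names, not the thesis's actual capacity expressions): maximize a rate r subject to r lying below the time-averaged value of every cut. The thesis's structural result, that for the real diamond-network expressions with n ≤ 6 relays an optimal schedule needs at most n + 1 active states, is about such LPs having sparse optimal solutions; this toy only illustrates the formulation.

```python
import numpy as np
from scipy.optimize import linprog

n = 3                      # relays
S = 2 ** n                 # listen/transmit states
K = 2 ** n                 # cuts of the diamond network

# Hypothetical cut capacities C[k, s]: value of cut k while state s is active.
rng = np.random.default_rng(1)
C = rng.uniform(0.0, 5.0, size=(K, S))

# Variables x = [r, t_1, ..., t_S]; maximize r  <=>  minimize -r.
c = np.concatenate(([-1.0], np.zeros(S)))
# Rate constraints: r - sum_s t_s * C[k, s] <= 0 for every cut k.
A_ub = np.hstack([np.ones((K, 1)), -C])
b_ub = np.zeros(K)
# Time fractions sum to one.
A_eq = np.concatenate(([0.0], np.ones(S))).reshape(1, -1)
b_eq = np.array([1.0])
bounds = [(None, None)] + [(0.0, 1.0)] * S

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
t = res.x[1:]
active = np.flatnonzero(t > 1e-9)
print(f"rate = {-res.fun:.3f}, active states = {active.size} of {S}")
```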