
Publication

# COVID-19: data-driven dynamic asset allocation in times of pandemic

Abstract

The COVID-19 pandemic has demonstrated the importance and value of multi-period asset allocation strategies that respond to rapid changes in market behavior. In this article, we formulate and solve a multi-stage stochastic optimization problem, choosing the indices' optimal weights dynamically via a customized data-driven Bellman procedure. We use basic asset classes (equities, fixed income, cash and cash equivalents) and five corresponding indices for the development of optimal strategies. In our multi-period setup, the probability model describing the uncertainty about asset returns changes over time and is scenario-specific. Given a sufficiently high variation of model parameters, this allows us to account for possible crisis events. In this article, we construct optimal allocation strategies accounting for the influence of the COVID-19 pandemic on financial returns. We observe that growth in the number of infections influences financial markets and increases the dependence between assets. Solving the multi-stage asset allocation problem dynamically, we (i) propose a fully data-driven method to estimate time-varying conditional probability models and (ii) implement an optimal quantization procedure for the scenario approximation. We consider quantization methods optimal in the sense of minimal distance between the continuous-state distributions and their discrete approximations. Minimizing the well-known Kantorovich-Wasserstein distance at each time stage, we bound the approximation error, enhancing the accuracy of decision-making. Using the first-stage allocation strategy developed via our method, we observe ca. 10% wealth growth on average out-of-sample, with a maximum of ca. 20% and a minimum of ca. 5%, over a three-month period. Further, we demonstrate that monthly reoptimization helps reduce uncertainty at the cost of lower maximal wealth. Also, we show that optimistically offset distribution parameters lead to a reduction in out-of-sample wealth due to the COVID-19 crisis.
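The scenario-approximation step can be illustrated with a minimal sketch (an illustrative stand-in under assumed return parameters, not the paper's actual procedure): a Lloyd-style quantization of simulated monthly returns, which iteratively reduces the squared Wasserstein-2 distance between the empirical sample distribution and its m-point discrete approximation.

```python
import numpy as np

def lloyd_quantize(samples, m, iters=50):
    """Approximate an empirical 1-D distribution by m weighted points.
    Each Lloyd iteration assigns samples to the nearest quantization
    point and moves each point to the mean of its cell, which reduces
    the squared Wasserstein-2 distance to the sample cloud."""
    rng = np.random.default_rng(0)
    points = rng.choice(samples, size=m, replace=False).astype(float)
    for _ in range(iters):
        # assign every sample to its nearest quantization point
        idx = np.argmin(np.abs(samples[:, None] - points[None, :]), axis=1)
        for j in range(m):
            cell = samples[idx == j]
            if cell.size:
                points[j] = cell.mean()   # centroid update
    # scenario probabilities = fraction of samples in each cell
    probs = np.bincount(idx, minlength=m) / samples.size
    order = np.argsort(points)
    return points[order], probs[order]

# quantize 10,000 simulated monthly returns into 5 scenarios
# (mean and volatility here are arbitrary illustration values)
returns = np.random.default_rng(1).normal(0.01, 0.05, 10_000)
scenarios, probs = lloyd_quantize(returns, m=5)
```

At convergence each point is the conditional mean of its cell, so the probability-weighted mean of the scenarios matches the sample mean; a multi-stage version would repeat this per stage and per conditioning scenario.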


Related concepts (17)

Decision-making

In psychology, decision-making (also spelled decision making and decisionmaking) is regarded as the cognitive process resulting in the selection of a belief or a course of action among several possible alternative options.

Uncertainty

Uncertainty refers to epistemic situations involving imperfect or unknown information. It applies to predictions of future events, to physical measurements that are already made, or to the unknown.

Stochastic programming

In the field of mathematical optimization, stochastic programming is a framework for modeling optimization problems that involve uncertainty. A stochastic program is an optimization problem in which some or all problem parameters are uncertain but follow known probability distributions.

Related publications (5)

Anjali Devi Vanapilli Nursimulu

Is the human brain wired for wealth? The setting is the high-velocity financial environment. Undoubtedly, the development of sophisticated derivative instruments has improved the allocation of risk across economies, highlighting the nexus between banking and finance and economic development. But bouts of irrational exuberance raise concerns about the unprecedented pace of financial innovation. The recent past has witnessed the rapid growth of algorithmic and automated trading as competitive strategies to capture gains, for example, from zero-latency trades. At the same time, financial headlines have been set ablaze by behavioral treatises related to our evolutionary hardwiring for such emotions as greed and fear; seemingly, we are not predisposed to make rational financial decisions. This research takes a step back and explores to what extent the human brain is well adapted to assess risk and reward in financial markets. The research implemented three novel behavioral experiments, one of which used high-resolution electrical neuroimaging, and is organized along two distinct themes that relate to specific features of financial trading. The first theme concerns the speed of financial decisions. Are decisions made under a short time constraint more likely to be biased than those made under less time pressure? How fast does the brain process financial information? The second theme is (financial) volatility. To what extent are individuals adept at making financial forecasts? The dearth of knowledge on these topics prompted the inductive and transversal bent of this research. Several new insights emerged that challenge the behavioral views of financial decision-making. First, fast decisions as observed on the trading floor are well captured by moment-based theory, a workhorse of Classical Finance. This is in marked contrast to the generally accepted view that fast decisions impose a bound on rationality; rational decisions are assumed to take time to build. Surprisingly, biases loom larger with longer decision time. Second, analysis of electrical brain signals indicates that the brain is very fast at extracting complex monetary reward features from the environment. The behavioral and electrophysiological findings highlight the need to better calibrate the speed of decisions. Third, decision-making (here, learning and forecasting) may go awry when individuals face financial volatility, but not always. The overall findings lead to the proposition that the border between rational and irrational financial decisions is much more razor-thin than behaviorists opine. The research paints a complex picture of how emotions affect decisions. A high emotional quotient and a long investment time horizon respectively distinguish professional traders and long-term investors from novice traders and investors chasing short-term gains; the latter two groups are more likely to react emotionally. Real experts in high-velocity environments are rare and forecasts are bound to be imperfect; algorithmic and automated trades are important aids to diverse players in financial markets. Last but not least, extrapolating from the findings, regulatory policy should concern the transparency of financial products, technology-enabled trades, and preferential access to trading platforms. Financial institutions, on the other hand, ought to review their business models, paying particular attention to their reward systems.

Decision making and planning with partial state information is a problem faced by all forms of intelligent entities. Formulating a problem under partial state information leads to an exorbitant set of choices with associated probabilistic outcomes, making its resolution difficult with traditional planning methods. Human beings have acquired the ability to act under uncertainty through education and self-learning. Transferring our know-how to artificial agents and robots will make it faster for them to learn, and even to improve upon us in tasks where only incomplete knowledge is available, which is the objective of this thesis. We model how humans reason with respect to their beliefs and transfer this knowledge, in the form of a parameterised policy following a Programming by Demonstration framework, to a robot apprentice for two spatial navigation tasks: in the first task a wooden block must be localised on a table, and in the second a power socket must be found and connected to. In both tasks the human teacher and robot apprentice rely only on haptic and tactile information. We model the human's and robot's beliefs by a probability density function, which we update through recursive Bayesian state-space estimation. To model the reasoning processes of human subjects performing the search tasks, we learn a generative joint distribution over beliefs and actions (end-effector velocities) recorded during executions of the task. For the first search task the direct mapping from beliefs to actions is learned, whilst for the second we incorporate a cost function used to adapt the policy parameters in a Reinforcement Learning framework, and show a considerable improvement, in terms of the distance taken to accomplish the task, over solely learning the behaviour. Both search tasks above can be considered active localisation, as the uncertainty originates only from the position of the agent in the world. We also consider searches in which both the position of the robot and features of the environment are uncertain. Given the unstructured nature of the belief, a histogram parametrisation of the joint distribution of the robot's position and the features is necessary. However, doing so naively quickly becomes intractable, as the space and time complexity is exponential in the number of dimensions. We demonstrate that by parametrising only the marginals and memorising the parameters of the measurement likelihood functions, we can recover exactly the same solution as the naive parametrisation at a cost that is linear in space and time.
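The complexity argument can be made concrete with a small sketch (bin counts, feature counts, and the single-marginal Bayesian update below are illustrative assumptions, not the thesis code): a joint histogram grows exponentially with the number of uncertain dimensions, while storing one histogram per marginal grows only linearly, and each marginal can still be updated recursively.

```python
import numpy as np

n, k = 50, 6                      # bins per dimension, number of features (assumed)
joint_cells = n ** (k + 1)        # full joint over position + k features: exponential
marginal_cells = (k + 1) * n      # one histogram per dimension: linear

# one recursive Bayesian update on a single marginal: multiply the prior
# histogram by a measurement likelihood, then renormalise
prior = np.full(n, 1.0 / n)                                   # uniform belief
likelihood = np.exp(-0.5 * ((np.arange(n) - 20) / 3.0) ** 2)  # Gaussian-shaped sensor model
posterior = prior * likelihood
posterior /= posterior.sum()
```

With these numbers the joint representation needs 50^7 cells against 350 for the marginals; recovering the exact joint additionally requires memorising the measurement-likelihood parameters, as the abstract notes.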

Aude Billard, Guillaume Pierre Luc De Chambrier

Decision making and planning for which the state information is only partially available is a problem faced by all forms of intelligent entities, be they virtual, synthetic, or biological. The standard approach to solving such a decisional problem mathematically is to formulate it as a partially observable Markov decision process (POMDP) and apply the same optimisation techniques used for the Markov decision process (MDP). However, naively applying the methodology used to solve MDPs to POMDPs makes the problem computationally intractable. To address this, we take a programming-by-demonstration approach to provide a solution to the POMDP in continuous state and action space. In this work, we model the decision-making process followed by humans when searching blindly for an object on a table. We show that by representing the belief about the human's position in the environment with a particle filter (PF), and learning a mapping from this belief to end-effector velocities with a Gaussian mixture model (GMM), we can model the human's search process and reproduce it for any agent. We further categorise the types of behaviour demonstrated by humans as either risk-prone or risk-averse, and find that more than 70% of the human searches were risk-averse. We contrast the performance of this human-inspired search model with greedy and coastal-navigation search methods. Our evaluation metrics are the distance taken to reach the goal and how each method minimises uncertainty. We further analyse the control policies of the coastal-navigation and GMM search models and argue that taking uncertainty into account is more efficient with respect to the distance travelled to reach the goal.
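The particle-filter belief described above can be sketched minimally (a 1-D position, assumed noise levels, and a bootstrap resampling rule are all illustrative choices, not the paper's implementation): predict by propagating particles through the motion, reweight by the measurement likelihood, and resample when the effective sample size collapses.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_update(particles, weights, move, measurement, meas_std=0.05):
    """One predict/update step of a bootstrap particle filter over a
    1-D position belief (illustrative stand-in for the search belief)."""
    # predict: shift particles by the commanded motion plus process noise
    particles = particles + move + rng.normal(0.0, 0.01, particles.size)
    # update: reweight by the Gaussian likelihood of the measurement
    weights = weights * np.exp(-0.5 * ((measurement - particles) / meas_std) ** 2)
    weights /= weights.sum()
    # resample if the effective sample size drops below half the particles
    if 1.0 / np.sum(weights ** 2) < particles.size / 2:
        idx = rng.choice(particles.size, size=particles.size, p=weights)
        particles = particles[idx]
        weights = np.full(particles.size, 1.0 / particles.size)
    return particles, weights

# start from a uniform belief over [0, 1], observe the target near 0.3
particles = rng.uniform(0.0, 1.0, 500)
weights = np.full(500, 1.0 / 500)
particles, weights = pf_update(particles, weights, move=0.0, measurement=0.3)
estimate = float(np.sum(weights * particles))
```

In the paper's setup the GMM would then map statistics of this belief (rather than the raw particles) to an end-effector velocity command.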

2014