In time series analysis, state-space models provide a broad and flexible class of models. The basic idea is to describe an unobservable phenomenon of interest on the basis of noisy data. The first constituent of such a model is the so-called state equation, which characterises the unobserved state of the system. The second part is the so-called observation equation, which describes the observable variables as a function of the unobserved state. The purpose of the analysis of state-space models is to infer the relevant properties of the unobserved phenomenon from the observed data. A powerful tool to do so in the linear Gaussian setting is the Kalman filter. It provides a simple recursive computational scheme for the conditional expectation of the unobserved state given the observed data. However, since the Kalman filter is linear in the data, it is sensitive to outlying observations.

A certain robustness of the procedure can easily be achieved by abandoning the Gaussianity assumption. On the other hand, the assumption of linearity is frequently found to be inadequate in practical situations and therefore needs to be generalised as well. Without linearity and Gaussianity, however, the simple recursive formulae of the Kalman filter are lost, and the recursive computation of the conditional expectation must be replaced by approximations of ratios of high dimensional integrals. Numerical and Monte Carlo integration techniques have been proposed for this purpose. The former yield powerful tools in low dimensional problems but suffer from the resulting computational burden in high dimensional settings. Monte Carlo integration techniques, on the other hand, lead to so-called particle filters, which provide computationally attractive tools applicable even in high dimensional problems. Although particle filters have some appealing properties, they are not entirely convincing: the approximation error is relatively large, outliers may significantly affect the computational efficiency, and in practical situations it is often very difficult to tune them adequately.

Hence, we propose to adopt another integration method, so-called scrambled net integration, a hybrid of numerical and Monte Carlo integration techniques. For the non-iterative approximation of integrals, this method has been shown to be often much more accurate than Monte Carlo methods and to cope well with the computational burden in high dimensions. We implement scrambled net integration in the general Kalman recursions and derive new prediction, filtering and smoothing algorithms. Further, we show that the approximation error remains stable during the iterations and that the accuracy of the resulting state estimates is higher than that of particle filters. The resulting algorithms are also more robust than the particle filter algorithm, since their computational complexity is not influenced by outlying observations. In addition, we prove a central limit theorem for the estimator of the filtering expectation.

Scrambled nets can also be used for hyper-parameter estimation, since the likelihood of a general state-space model is a by-product of the general Kalman recursions. They not only provide an approximation to the likelihood function but also a suitable grid over which it is maximised. The resulting estimation procedure is particularly attractive when dealing with high dimensional hyper-parameter spaces.
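For concreteness, the two model components described above can be written in a standard way (the notation here is illustrative and not taken from the abstract):

\[
  x_t = f_t(x_{t-1}, \eta_t) \quad \text{(state equation)}, \qquad
  y_t = g_t(x_t, \varepsilon_t) \quad \text{(observation equation)}.
\]

In the linear Gaussian special case, $x_t = F_t x_{t-1} + \eta_t$ with $\eta_t \sim N(0, Q_t)$ and $y_t = H_t x_t + \varepsilon_t$ with $\varepsilon_t \sim N(0, R_t)$, the Kalman filter recursions for the filtered mean $\hat{x}_{t\mid t}$ and covariance $P_{t\mid t}$ read

\[
  \hat{x}_{t\mid t-1} = F_t \hat{x}_{t-1\mid t-1}, \qquad
  P_{t\mid t-1} = F_t P_{t-1\mid t-1} F_t^\top + Q_t,
\]
\[
  K_t = P_{t\mid t-1} H_t^\top \bigl( H_t P_{t\mid t-1} H_t^\top + R_t \bigr)^{-1}, \qquad
  \hat{x}_{t\mid t} = \hat{x}_{t\mid t-1} + K_t \bigl( y_t - H_t \hat{x}_{t\mid t-1} \bigr), \qquad
  P_{t\mid t} = (I - K_t H_t) P_{t\mid t-1}.
\]

The filtered mean is thus linear in the observations, which is exactly why outlying values of $y_t$ propagate directly into the state estimate.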
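The role of a scrambled net can be illustrated with a minimal, self-contained sketch. The Gaussian prior and likelihood below, as well as the function name scrambled_net_filter_step, are our own illustrative choices and not the models or algorithms of the thesis; the point is only how a scrambled Sobol' net (here via scipy.stats.qmc) replaces i.i.d. Monte Carlo draws when approximating a ratio of integrals such as a one-step filtering expectation.

    # Hedged sketch: approximate E[x | y] = \int x p(y|x) p(x) dx / \int p(y|x) p(x) dx
    # with a scrambled Sobol' net instead of plain Monte Carlo draws.
    # Gaussian prior/likelihood are illustrative only, not the thesis model.
    import numpy as np
    from scipy.stats import norm, qmc

    def scrambled_net_filter_step(y, prior_mean, prior_std, obs_std, m=10):
        # 2**m scrambled Sobol' points, uniform on (0, 1) after randomisation
        sampler = qmc.Sobol(d=1, scramble=True, seed=0)
        u = sampler.random_base2(m).ravel()
        # map the net to the prior via the inverse CDF
        x = norm.ppf(u, loc=prior_mean, scale=prior_std)
        # likelihood weights p(y | x)
        w = norm.pdf(y, loc=x, scale=obs_std)
        # ratio of the two integral estimates = filtered mean
        return np.sum(w * x) / np.sum(w)

    # Example: exact Gaussian posterior mean is 0.96; the estimate is close.
    print(scrambled_net_filter_step(y=1.2, prior_mean=0.0, prior_std=1.0, obs_std=0.5))

The scrambling randomises the net while preserving its equidistribution properties, so each point is individually uniformly distributed; this is what makes unbiased estimation and the analysis of the approximation error possible.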
Finally, one of the main difficulties when studying the properties of an unobserved phenomenon is designing a good general state-space model. Besides the determination of the state and observation equations, the choice of appropriate noise distributions is crucial, since they express the inherent uncertainty of the model. From a practical point of view it is frequently completely unclear how to choose these noise distributions. Rather than relying on an uneducated guess of a single distributional model, we propose to use several fundamentally different distributions and derive an adaptive filter, providing a data-driven approach which is naturally robust with respect to deviations from a distributional model.
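One simple way to make such a multi-distribution approach concrete is Bayesian model averaging over candidate noise models $M_1, \dots, M_K$ (e.g. Gaussian versus heavy-tailed); this is only an illustrative sketch of data-driven model weighting, not necessarily the adaptive filter derived in the work:

\[
  \pi_t(k) \propto \pi_{t-1}(k) \, p(y_t \mid y_{1:t-1}, M_k), \qquad
  \hat{x}_{t\mid t} = \sum_{k=1}^{K} \pi_t(k) \, \mathrm{E}[x_t \mid y_{1:t}, M_k].
\]

Models whose predictive densities explain the incoming observations well automatically receive more weight, which is one sense in which such a filter adapts to the data and remains robust when any single noise distribution is misspecified.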