In this thesis, we propose model order reduction techniques for high-dimensional PDEs that preserve the structures of the original problems, and we develop a closure modeling framework leveraging the Mori-Zwanzig formalism and recurrent neural networks. Since high-fidelity approximations of PDEs often involve a large number of degrees of freedom, the repeated evaluations required for numerical optimization and rapid feedback are computationally challenging.

The first part of this thesis is devoted to preserving the invariants, symmetries, and structures of the high-dimensional equation during the reduction process. Traditional reduction techniques are not guaranteed to yield stable reduced systems, even when the target problem is stable. In the context of fluid flows, the skew-symmetric structure of the problem entails the conservation of the kinetic energy of the system. By preserving the same structure at the level of the reduced model, we obtain enhanced stability and accuracy, and the reduced model acquires physical significance by conserving a surrogate of the energy of the original problem. Next, we focus on Hamiltonian systems, which, being driven by symmetry, are of great interest to the reduction community. It is well known that breaking these symmetries in the reduced model is accompanied by a blowup of the system energy and of the flow volume. In this thesis, geometric reduced models for Hamiltonian systems are further developed and combined with dynamically orthogonal methods, addressing the poor reducibility in time of advection-dominated problems. The reduced solution is expressed as a linear combination of a finite number of modes and coincides with the symplectic projection of the high-fidelity Hamiltonian problem onto the tangent space of the approximating manifold. An error surrogate is used to monitor the approximation quality of the reduced model and to adapt the rank of the approximating system if necessary. The method is further extended through a combination of the discrete empirical interpolation method (DEIM) and dynamic mode decomposition (DMD) to reduce non-polynomial nonlinearities while preserving the symplectic structure of the problem, and it is applied to the Vlasov-Poisson system.

In the second part of the thesis, we consider several data-driven methods to address, via a closure term, the poor accuracy of Galerkin reduced models in the under-resolved regime. The closure term is derived systematically from the Mori-Zwanzig formalism by introducing projection operators onto the spaces of resolved and unresolved scales, resulting in an additional memory integral term. The interaction between different scales turns out to be nonlocal in time and governed by a high-dimensional orthogonal dynamics equation, which cannot be solved precisely and efficiently. Several classical methods from statistical mechanics are used to approximate the memory term, exploiting the finite support of the memory kernel. We conclude the thesis by showing through numerical experiments how long short-term memory networks, i.e., machine learning architectures characterized by feedback connections, represent a valid tool for approximating the additional memory term.
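As a minimal illustration of the energy argument in the first part (generic notation, assumed for this sketch and not taken from the thesis), consider a semi-discrete flow model governed by a state-dependent skew-symmetric operator; a Galerkin projection onto an orthonormal basis inherits the skew-symmetry and therefore conserves a surrogate of the kinetic energy:

\[
\dot{u} = A(u)\,u, \quad A(u)^{\mathsf T} = -A(u)
\;\;\Longrightarrow\;\;
\frac{\mathrm{d}}{\mathrm{d}t}\,\tfrac12\|u\|^2 = u^{\mathsf T} A(u)\,u = 0,
\]
\[
u \approx V q, \quad V^{\mathsf T}V = I
\;\;\Longrightarrow\;\;
\dot{q} = V^{\mathsf T} A(Vq)\,V\,q, \qquad
\frac{\mathrm{d}}{\mathrm{d}t}\,\tfrac12\|q\|^2 = 0,
\]

since the reduced operator V^T A(Vq) V is again skew-symmetric.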
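The symplectic reduction of Hamiltonian systems discussed above can be sketched as follows (standard symplectic model reduction notation; the specific construction in the thesis, including the dynamically orthogonal time dependence of the basis, is not reproduced here). The full dynamics is projected onto a low-dimensional symplectic subspace so that the reduced system is again Hamiltonian:

\[
\dot{x} = J_{2N}\,\nabla_x H(x),
\qquad
J_{2N} = \begin{pmatrix} 0 & I_N \\ -I_N & 0 \end{pmatrix},
\]
\[
x \approx A z, \quad A^{\mathsf T} J_{2N} A = J_{2k}
\;\;\Longrightarrow\;\;
\dot{z} = J_{2k}\,\nabla_z H_r(z), \qquad H_r(z) = H(Az),
\]

so the reduced Hamiltonian H_r is exactly conserved along the reduced flow, avoiding the energy blowup associated with breaking the symplectic structure.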
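The closure term of the second part originates from the Mori-Zwanzig formalism, which can be summarized in generic notation as follows (a textbook form of the identity, not the thesis' specific operators). For dynamics generated by a Liouville-type operator L, a projection P onto the resolved scales, and Q = I - P, the resolved dynamics satisfies a generalized Langevin equation with a Markovian term, a memory integral, and an orthogonal-dynamics (noise) term:

\[
\frac{\partial}{\partial t}\, e^{tL} x_0
= e^{tL} P L x_0
+ \int_0^t e^{(t-s)L}\, P L\, e^{sQL} Q L x_0 \,\mathrm{d}s
+ e^{tQL} Q L x_0 .
\]

The memory kernel K(s) = P L e^{sQL} Q L x_0 involves the semigroup e^{sQL} of the orthogonal dynamics, which is high-dimensional and cannot be evolved exactly; the approximations considered in the thesis exploit the finite support of K or learn it from data.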
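As a purely illustrative sketch of the learned closure (hypothetical names and dimensions; PyTorch assumed, not the thesis' implementation), a long short-term memory network can map a window of past reduced states to an approximation of the memory term that augments the Galerkin right-hand side:

import torch
import torch.nn as nn

class MemoryClosure(nn.Module):
    # Hypothetical sketch: an LSTM mapping the recent history of the reduced
    # state to an approximation of the Mori-Zwanzig memory term.
    def __init__(self, n_reduced, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_reduced, hidden_size=hidden_size,
                            batch_first=True)
        self.readout = nn.Linear(hidden_size, n_reduced)

    def forward(self, history):
        # history: (batch, time, n_reduced) window of past reduced states
        out, _ = self.lstm(history)
        # the last hidden state predicts the closure term at the current time
        return self.readout(out[:, -1, :])

# Usage sketch: dq/dt ≈ f_Galerkin(q) + closure(history of q)
model = MemoryClosure(n_reduced=10)
q_history = torch.randn(1, 20, 10)   # dummy 20-step history of a 10-mode model
memory_term = model(q_history)       # shape (1, 10)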