Publication

A data-driven shock capturing approach for discontinuous Galerkin methods

Jan Sickmann Hesthaven
2018
Journal paper
Abstract

We propose a data-driven artificial viscosity model for shock capturing in discontinuous Galerkin methods. The proposed model trains a multi-layer feedforward network to map from the element-wise solution to a smoothness indicator, based on which the artificial viscosity is computed. The data set for the training of the network is obtained using canonical functions. The compactness of the data set, which is critical to the success of training the network, is ensured by normalization and the adjustment of the range of the smoothness indicator. The network is able to recover the expected smoothness much more reliably than the original averaged modal decay model. Several smooth and non-smooth test cases are considered to investigate the performance of this approach. Convergence tests show that the proposed model recovers the accuracy of the corresponding linear schemes for smooth regions. For non-smooth flows, the model is observed to suppress spurious oscillations well.
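The pipeline described in the abstract can be sketched as follows. This is a minimal illustration with hypothetical names and made-up parameters, not the paper's implementation: a small feedforward network maps the normalized element-wise solution to a smoothness indicator s, and the artificial viscosity is then ramped with a Persson-Peraire-style smooth activation (an assumption here; the paper's exact viscosity formula may differ).

```python
import numpy as np

def normalize(u):
    """Map an element's solution values onto [0, 1]; the abstract notes
    that normalization keeps the training data set compact."""
    lo, hi = u.min(), u.max()
    return (u - lo) / (hi - lo) if hi > lo else np.zeros_like(u)

def mlp_smoothness(u, hidden, w_out, b_out):
    """Feedforward pass mapping an element-wise solution to a scalar
    smoothness indicator s (all weights are hypothetical stand-ins for
    the trained parameters)."""
    a = normalize(np.asarray(u, dtype=float))
    for W, b in hidden:                 # hidden layers
        a = np.tanh(W @ a + b)
    return float(w_out @ a + b_out)     # linear scalar output

def artificial_viscosity(s, eps_max=1.0, s0=-4.0, kappa=1.0):
    """Ramp the viscosity from 0 to eps_max as s crosses s0, using a
    Persson-Peraire-style smooth activation (an assumption; the
    paper's exact formula may differ)."""
    if s < s0 - kappa:
        return 0.0
    if s > s0 + kappa:
        return eps_max
    return 0.5 * eps_max * (1.0 + np.sin(np.pi * (s - s0) / (2.0 * kappa)))
```

With trained weights in place, each element would evaluate mlp_smoothness on its local solution and feed the result to artificial_viscosity: very negative indicators (smooth data) yield zero viscosity, while indicators above s0 + kappa switch on the full eps_max.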

Related concepts (32)
Convergence tests
In mathematics, convergence tests are methods of testing for the convergence, conditional convergence, absolute convergence, interval of convergence or divergence of an infinite series ∑ aₙ. If the limit of the summand is undefined or nonzero, that is, lim_{n→∞} aₙ ≠ 0, then the series must diverge. In this sense, the partial sums are Cauchy only if this limit exists and is equal to zero. The test is inconclusive if the limit of the summand is zero. This is also known as the nth-term test, test for divergence, or the divergence test.
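The nth-term test above can be illustrated numerically. This is a heuristic sketch, not a proof: it samples the summand at one large n instead of taking a true limit, and the function name is illustrative.

```python
def nth_term_test(a, n_max=10**6, tol=1e-3):
    """Numerical version of the nth-term (divergence) test: if the
    summand a(n) does not tend to 0, the series sum of a(n) must
    diverge. A vanishing summand tells us nothing either way."""
    if abs(a(n_max)) > tol:
        return "diverges"       # terms do not vanish
    return "inconclusive"       # terms tend to 0: test says nothing

print(nth_term_test(lambda n: n / (n + 1)))  # terms -> 1: diverges
print(nth_term_test(lambda n: 1 / n))        # harmonic: inconclusive
```

The harmonic series is the classic reminder of the inconclusive case: its terms tend to zero, yet the series still diverges.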
Radius of convergence
In mathematics, the radius of convergence of a power series is the radius of the largest disk at the center of the series in which the series converges. It is either a non-negative real number or ∞. When it is positive, the power series converges absolutely and uniformly on compact sets inside the open disk of radius equal to the radius of convergence, and it is the Taylor series of the analytic function to which it converges.
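When the limit of |c(n)/c(n+1)| exists, it equals the radius of convergence of ∑ c(n) xⁿ (the ratio test). A small numerical sketch, with an illustrative function name:

```python
def ratio_radius(c, n=60):
    """Estimate the radius of convergence R = lim |c(n) / c(n+1)| of
    the power series sum c(n) * x**n, when that limit exists."""
    return abs(c(n) / c(n + 1))

print(ratio_radius(lambda n: 1.0))        # geometric series: R = 1.0
print(ratio_radius(lambda n: 2.0 ** -n))  # sum (x/2)**n:     R = 2.0
```

For the exponential series, c(n) = 1/n!, the ratio is n + 1 and grows without bound, matching the "or ∞" case above.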
Feedforward neural network
A feedforward neural network (FNN) is one of the two broad types of artificial neural network, characterized by direction of the flow of information between its layers. Its flow is uni-directional, meaning that the information in the model flows in only one direction—forward—from the input nodes, through the hidden nodes (if any) and to the output nodes, without any cycles or loops, in contrast to recurrent neural networks, which have a bi-directional flow.
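A minimal sketch of such a uni-directional forward pass (the layer shapes and weights are arbitrary illustrations, NumPy only):

```python
import numpy as np

def feedforward(x, layers):
    """One forward pass: information flows strictly input -> hidden ->
    output, with no cycles or loops (unlike a recurrent network)."""
    a = np.asarray(x, dtype=float)
    for W, b in layers[:-1]:
        a = np.tanh(W @ a + b)   # hidden layers
    W, b = layers[-1]
    return W @ a + b             # linear output layer

rng = np.random.default_rng(0)   # arbitrary illustrative weights
layers = [(rng.normal(size=(4, 3)), np.zeros(4)),
          (rng.normal(size=(2, 4)), np.zeros(2))]
y = feedforward([0.1, -0.2, 0.3], layers)
print(y.shape)  # (2,)
```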
Related publications (34)

Generalization of Scaled Deep ResNets in the Mean-Field Regime

Volkan Cevher, Grigorios Chrysos, Fanghui Liu

Despite the widespread empirical success of ResNet, the generalization properties of deep ResNet are rarely explored beyond the lazy training regime. In this work, we investigate scaled ResNet in the limit of infinitely deep and wide neural networks, of wh ...
2024

Privacy-preserving federated neural network training and inference

Sinem Sav

Training accurate and robust machine learning models requires a large amount of data that is usually scattered across data silos. Sharing, transferring, and centralizing the data from silos, however, is difficult due to current privacy regulations (e.g., H ...
EPFL, 2023

One- and two-step semi-Lagrangian integrators for arbitrary Lagrangian-Eulerian-finite element two-phase flow simulations

John Richard Thome, Gustavo Rabello Dos Anjos, Gustavo Charles Peixoto de Oliveira

Semi-Lagrangian (SL) schemes are of utmost relevance to simulate two-phase flows where advection dominates. The combination of SL schemes with the finite element (FE) method and arbitrary Lagrangian-Eulerian (ALE) dynamic meshes yields a strong ingredient ...
WILEY, 2022
Related MOOCs (10)
Introduction to optimization on smooth manifolds: first order methods
Learn to optimize on smooth, nonlinear spaces: Join us to build your foundations (starting at "what is a manifold?") and confidently implement your first algorithm (Riemannian gradient descent).
Analyse I
The content of this course corresponds to that of the Analyse I course as taught to first-semester students at EPFL. Each chapter of the course corresponds
Analyse I (part 1): Prelude, basic notions, the real numbers
Basic concepts of real analysis and an introduction to the real numbers.
