
Publication

# Adaptive reduced basis finite element heterogeneous multiscale method

Abstract

An adaptive reduced basis finite element heterogeneous multiscale method (RB-FE-HMM) is proposed for elliptic problems with multiple scales. The multiscale method is based on the RB-FE-HMM introduced in [A. Abdulle, Y. Bai, Reduced basis finite element heterogeneous multiscale method for high-order discretizations of elliptic homogenization problems, J. Comput. Phys. 231 (21) (2012) 7014-7036]. It couples a macroscopic solver with effective data recovered from the solution of micro problems solved on sampling domains. Unlike classical numerical homogenization methods, the micro problems are computed in a finite dimensional space spanned by a small number of accurately computed representative micro solutions (the reduced basis) obtained by a greedy algorithm in an offline stage. In this paper we present a residual-based a posteriori error analysis in the energy norm as well as an a posteriori error analysis in quantities of interest. For both types of adaptive strategies, rigorous a posteriori error estimates are derived and corresponding error estimators are proposed. In contrast to the adaptive finite element heterogeneous multiscale method (FE-HMM), there is no need to adapt the micro mesh simultaneously to the macroscopic mesh refinement. Up to an offline preliminary stage, the RB-FE-HMM has the same computational complexity as a standard adaptive FEM for the effective problem. Two- and three-dimensional numerical experiments confirm the efficiency of the RB-FE-HMM and illustrate the improvements compared to the adaptive FE-HMM. (C) 2013 Elsevier B.V. All rights reserved.
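The greedy offline stage described above can be sketched in a deliberately simplified setting. This is not the paper's algorithm as implemented: here micro solutions are plain vectors, and the projection residual stands in for the rigorous a posteriori error surrogate used in the actual method; all names are illustrative.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def greedy_reduced_basis(snapshots, tol=1e-6, max_basis=10):
    """Greedily select an orthonormal reduced basis from snapshot vectors.

    At each step the snapshot that is worst approximated by the span of
    the current basis (projection residual as a cheap error surrogate)
    is orthonormalized and appended, until the worst error drops below tol.
    """
    def residual(v, basis):
        # r = v - sum_b <b, v> b  (valid because the basis is orthonormal)
        r = list(v)
        for b in basis:
            c = dot(b, v)
            r = [ri - c * bi for ri, bi in zip(r, b)]
        return r

    basis = []
    for _ in range(max_basis):
        errs = [math.sqrt(dot(r, r))
                for r in (residual(v, basis) for v in snapshots)]
        worst = max(range(len(snapshots)), key=errs.__getitem__)
        if errs[worst] < tol:
            break  # every snapshot is captured to tolerance
        r = residual(snapshots[worst], basis)
        basis.append([ri / errs[worst] for ri in r])
    return basis
```

If the snapshots happen to lie in a low-dimensional subspace, the loop terminates after exactly that many steps, which is the effect the offline stage exploits: the online micro problems are then solved in this small space instead of a fine finite element space.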


Related MOOCs (29)

Digital Signal Processing I

Basic signal processing concepts, Fourier analysis and filters. This module can be used as a starting point or as a basic refresher in elementary DSP.

Digital Signal Processing II

Adaptive signal processing, A/D and D/A. This module provides the basic tools for adaptive filtering and a solid mathematical framework for sampling and quantization.
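As a taste of the adaptive-filtering toolbox this module covers, here is a minimal least-mean-squares (LMS) filter. This is a generic illustration, not course material; the filter length and step size below are arbitrary choices.

```python
def lms_filter(x, d, n_taps=4, mu=0.05):
    """Least-mean-squares adaptive FIR filter.

    x: input samples, d: desired (reference) samples.
    Returns the final tap weights and the per-sample error signal.
    """
    w = [0.0] * n_taps
    errors = []
    for n in range(n_taps, len(x)):
        frame = x[n - n_taps:n][::-1]                        # newest sample first
        y = sum(wi * xi for wi, xi in zip(w, frame))         # filter output
        e = d[n] - y                                         # instantaneous error
        w = [wi + mu * e * xi for wi, xi in zip(w, frame)]   # LMS weight update
        errors.append(e)
    return w, errors
```

A classic use is system identification: feed white noise through an unknown FIR system to produce the reference signal, and the taps converge to the system's coefficients while the error decays.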

Digital Signal Processing III

Advanced topics: this module covers real-time audio processing (with
examples on a hardware board), image processing and communication system design.

Related concepts (33)

Estimator

In statistics, an estimator is a rule for calculating an estimate of a given quantity based on observed data: thus the rule (the estimator), the quantity of interest (the estimand) and the result (the estimate) are distinguished. For example, the sample mean is a commonly used estimator of the population mean. There are point and interval estimators: point estimators yield a single value, whereas interval estimators yield a range of plausible values.
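The distinction can be made concrete with a small sketch: the sample mean as a point estimator, and a normal-approximation confidence interval as an interval estimator (the z-value 1.96 corresponds to roughly 95% coverage).

```python
import math
import random

def estimate_mean(sample, z=1.96):
    """Point and interval estimates of the population mean.

    The sample mean is the point estimator; a normal-approximation
    confidence interval (z = 1.96 for ~95%) is the interval estimator.
    """
    n = len(sample)
    mean = sum(sample) / n                                 # point estimate
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)   # unbiased sample variance
    half = z * math.sqrt(var / n)                          # interval half-width
    return mean, (mean - half, mean + half)

# Draw a sample from a population with known mean 10 and estimate it.
random.seed(0)
data = [random.gauss(10.0, 2.0) for _ in range(2000)]
mean, (lo, hi) = estimate_mean(data)
```

Here `mean` lands close to the true value 10, and the interval `(lo, hi)` quantifies the remaining uncertainty; with more data the interval shrinks while the point estimate stays a single number.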

Computational complexity

In computer science, the computational complexity or simply complexity of an algorithm is the amount of resources required to run it. Particular focus is given to computation time (generally measured by the number of elementary operations needed) and memory storage requirements. The complexity of a problem is the complexity of the best algorithms for solving it. The study of the complexity of explicitly given algorithms is called analysis of algorithms, while the study of the complexity of problems is called computational complexity theory.
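To make "number of elementary operations" concrete, the sketch below counts comparisons in linear and binary search (counting one three-way comparison per binary-search step; the exact convention is a modeling choice).

```python
def linear_search_count(a, target):
    """Linear scan; returns (found, number of comparisons)."""
    comparisons = 0
    for x in a:
        comparisons += 1
        if x == target:
            return True, comparisons
    return False, comparisons

def binary_search_count(a, target):
    """Binary search on a sorted list; counts one comparison per halving step."""
    comparisons, lo, hi = 0, 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if a[mid] == target:
            return True, comparisons
        elif a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False, comparisons
```

Searching for the last element of a sorted list of 1024 items costs 1024 comparisons with the linear scan but at most 11 (= log2(1024) + 1) with binary search, which is exactly the O(n) versus O(log n) gap that operation counts capture.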

Computational complexity theory

In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and relating these classes to each other. A computational problem is a task solved by a computer, solvable by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used.

Related publications (57)

Annalisa Buffa, Pablo Antolin Sanchez, Rafael Vazquez Hernandez, Luca Coradello (2020)

The focus of this work is on the development of an error-driven isogeometric framework, capable of automatically performing an adaptive simulation in the context of second- and fourth-order, elliptic partial differential equations defined on two-dimensiona ...

We propose a cheaper version of the a posteriori error estimator from Gorynina et al. (Numer. Anal. (2017)) for the linear second-order wave equation discretized by the Newmark scheme in time and by the finite element method in space. The new estimator preserv ...