Publication

Analysis-aware defeaturing of complex geometries

Abstract

Local modifications of a computational domain are often performed to simplify the meshing process and to reduce computational costs and memory requirements. However, removing geometrical features of a domain often introduces a non-negligible error in the solution of the differential problem defined on it. In this paper, we generalize the work of [1], in which an a posteriori estimator of the geometrical defeaturing error is derived for domains from which a single geometrical feature is removed. More precisely, we study the case of domains containing an arbitrary number of distinct features, and we carry out the analysis for the Poisson, linear elasticity, and Stokes equations. We introduce a simple and computationally cheap a posteriori estimator of the geometrical defeaturing error, whose reliability and efficiency are rigorously proved, and we introduce a geometric refinement strategy that accounts for the defeaturing error: starting from a fully defeatured geometry, the algorithm determines at each iteration which features need to be added to the geometrical model to reduce the defeaturing error. These important features are then added to the (partially) defeatured geometrical model at the next iteration, until the solution attains a prescribed accuracy. A wide range of numerical experiments is finally reported to illustrate and validate this work.
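For orientation, the snippet below is a rough Python sketch of the adaptive feature re-insertion loop described in the abstract. All names (solve, estimate_feature_errors, the marking fraction) are hypothetical placeholders standing in for the paper's actual estimator and geometric model; this is an illustration of the idea, not the authors' implementation.

```python
# Hypothetical sketch of the adaptive defeaturing loop described in the abstract.
# `solve` and `estimate_feature_errors` are user-supplied callables standing in
# for the PDE solver on the (partially) defeatured domain and for the per-feature
# a posteriori error estimator; they are not part of the paper's code.

def adaptive_defeaturing(full_features, solve, estimate_feature_errors,
                         tol=1e-3, marking_fraction=0.5, max_iter=20):
    """Start from the fully defeatured model (`full_features` is a set) and
    re-insert the features whose estimated defeaturing-error contribution is
    largest, until the total estimator drops below `tol`."""
    included = set()                      # features currently present in the model
    u_d = None
    for _ in range(max_iter):
        u_d = solve(included)             # solve on the (partially) defeatured domain
        # one estimator contribution per feature still missing from the model
        contributions = estimate_feature_errors(u_d, full_features - included)
        total = sum(contributions.values()) ** 0.5   # assuming squared contributions
        if total <= tol:
            break
        # greedy (Doerfler-type) marking: re-insert the most significant features
        ranked = sorted(contributions, key=contributions.get, reverse=True)
        n_mark = max(1, int(marking_fraction * len(ranked)))
        included |= set(ranked[:n_mark])
    return included, u_d
```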

Related concepts (38)
Fixed-point iteration
In numerical analysis, fixed-point iteration is a method of computing fixed points of a function. More specifically, given a function f defined on the real numbers with real values and given a point x0 in the domain of f, the fixed-point iteration is x_{n+1} = f(x_n), n = 0, 1, 2, ..., which gives rise to the sequence x0, f(x0), f(f(x0)), ... of iterated function applications that is hoped to converge to a point x*. If f is continuous, then one can prove that the obtained x* is a fixed point of f, i.e., f(x*) = x*. More generally, the function f can be defined on any metric space with values in that same space.
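A minimal Python sketch of this scheme, iterating x_{n+1} = f(x_n) until successive iterates stop changing (function and tolerance names are illustrative):

```python
import math

def fixed_point_iteration(f, x0, tol=1e-10, max_iter=1000):
    """Iterate x_{n+1} = f(x_n) until two successive iterates differ by at most `tol`."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) <= tol:
            return x_next
        x = x_next
    raise RuntimeError("fixed-point iteration did not converge")

# Example: x = cos(x) has a unique fixed point near 0.739 (the Dottie number).
print(fixed_point_iteration(math.cos, 1.0))
```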
Arnoldi iteration
In numerical linear algebra, the Arnoldi iteration is an eigenvalue algorithm and an important example of an iterative method. Arnoldi finds an approximation to the eigenvalues and eigenvectors of general (possibly non-Hermitian) matrices by constructing an orthonormal basis of the Krylov subspace, which makes it particularly useful when dealing with large sparse matrices. The Arnoldi method belongs to a class of linear algebra algorithms that give a partial result after a small number of iterations, in contrast to so-called direct methods which must complete to give any useful results (see for example, Householder transformation).
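A compact sketch of the classical Arnoldi process with modified Gram-Schmidt orthogonalization, assuming NumPy; the function name, breakdown tolerance, and example matrix are illustrative choices:

```python
import numpy as np

def arnoldi(A, b, k):
    """Build an orthonormal basis Q of the Krylov subspace span{b, Ab, ..., A^(k-1) b}
    and the (k+1) x k upper Hessenberg matrix H satisfying A Q[:, :k] = Q H."""
    n = len(b)
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        v = A @ Q[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt orthogonalization
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-12:           # happy breakdown: exact invariant subspace found
            return Q[:, :j + 1], H[:j + 1, :j]
        Q[:, j + 1] = v / H[j + 1, j]
    return Q, H

# Eigenvalues of the square block H[:k, :k] (Ritz values) approximate extremal
# eigenvalues of A after relatively few iterations.
A = np.random.default_rng(0).random((200, 200))
Q, H = arnoldi(A, np.ones(200), 30)
ritz = np.linalg.eigvals(H[:-1, :])
print(ritz[np.argmax(np.abs(ritz))])      # close to the dominant eigenvalue of A (about 100 here)
```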
Bayes estimator
In estimation theory and decision theory, a Bayes estimator or a Bayes action is an estimator or decision rule that minimizes the posterior expected value of a loss function (i.e., the posterior expected loss). Equivalently, it maximizes the posterior expectation of a utility function. An alternative way of formulating an estimator within Bayesian statistics is maximum a posteriori estimation. Suppose an unknown parameter θ is known to have a prior distribution π.
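As a concrete instance, consider the conjugate normal model: with a N(mu0, tau0^2) prior and N(theta, sigma^2) observations, the Bayes estimator under squared-error loss is the posterior mean, a precision-weighted average of the prior mean and the sample mean. A small NumPy sketch (names and data illustrative):

```python
import numpy as np

def bayes_estimator_normal(x, mu0, tau0, sigma):
    """Posterior mean of theta for the normal-normal conjugate model:
    prior theta ~ N(mu0, tau0^2), observations x_i ~ N(theta, sigma^2)."""
    n = len(x)
    prior_precision = 1.0 / tau0**2
    data_precision = n / sigma**2
    return (prior_precision * mu0 + data_precision * np.mean(x)) / (
        prior_precision + data_precision
    )

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=50)   # simulated data with true theta = 2.0, sigma = 1.0
print(bayes_estimator_normal(x, mu0=0.0, tau0=1.0, sigma=1.0))
```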
Related publications (70)

Analysis-aware defeaturing of complex geometries with Neumann features

Pablo Antolin Sanchez, Ondine Gabrielle Chanon

Local modifications of a computational domain are often performed in order to simplify the meshing process and to reduce computational costs and memory requirements. However, removing geometrical features of a domain often introduces a non-negligible error ...
Hoboken, 2023

A Shape Derivative Approach to Domain Simplification

Annalisa Buffa, Jochen Peter Hinz, Ondine Gabrielle Chanon, Alessandra Arrigoni

The objective of this study is to address the difficulty of simplifying the geometric model in which a differential problem is formulated, also called defeaturing, while simultaneously ensuring that the accuracy of the solution is maintained under control. ...
Oxford, 2023

Estimation of Self-Exciting Point Processes from Time-Censored Data

Thomas Alois Weber, Philipp Schneider

Self-exciting point processes, widely used to model arrival phenomena in nature and society, are often difficult to identify. The estimation becomes even more challenging when arrivals are recorded only as bin counts on a finite partition of the observation ...
2023
Related MOOCs (32)
Introduction to optimization on smooth manifolds: first order methods
Learn to optimize on smooth, nonlinear spaces: Join us to build your foundations (starting at "what is a manifold?") and confidently implement your first algorithm (Riemannian gradient descent).
Warm-up for EPFL
Warmup EPFL is intended for new EPFL students.
Algebra (part 1)
A French-language linear algebra MOOC accessible to everyone, taught rigorously and requiring no prerequisites.
