
Publication: Modélisation et analyse numérique de problèmes de réaction-diffusion provenant de la solidification d'alliages binaires (Modelling and numerical analysis of reaction-diffusion problems arising from the solidification of binary alloys)

1999

EPFL thesis

Abstract

In this work, we develop models and numerical methods to study some phenomena that arise in the solidification of binary alloys. We propose two distinct models based on the mass and energy conservation laws as well as on classical thermodynamics.

The first model is based on the theory of irreversible processes and therefore requires the computation of the entropy of the system to complete the description. To obtain this quantity, we develop a formalism and a method that yield numerical results. The construction is made so that the entropy function inherits some useful mathematical properties from the physical requirements. These properties allow us to devise and mathematically analyze an original scheme, using a time discretization that depends on a real parameter, to solve the solidification problem. In particular, we prove that the scheme is stable for all values of the time step provided the parameter is chosen correctly. We also give numerical results supporting the theory.

The second model, the so-called phase-field model, describes the formation of dendrites during the solidification of binary alloys. The nature of the problem forces us to use very fine meshes in some physical regions. To reduce the number of discrete unknowns, we develop an adaptive mesh strategy based on an ad hoc error estimator. Numerical tests show that the method converges on regular meshes and that the estimator agrees well with the true error. We also show that the mesh refinement strategy gives good results in both academic and physical cases.
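The abstract's "time discretization depending on a real parameter" suggests a one-parameter family in the spirit of the classical θ-scheme. The thesis's actual scheme acts on the entropy formulation and is more involved, but a minimal sketch for a linear 1-D reaction-diffusion equation (every name and value below is invented for illustration) shows the role the parameter plays in stability: θ ≥ 1/2 is unconditionally stable for this linear problem, while θ < 1/2 restricts the admissible time step.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import spsolve

# Illustrative theta-scheme for u_t = D u_xx - k u on (0, 1) with
# homogeneous Dirichlet boundary conditions. Not the thesis's scheme:
# a generic one-parameter time discretization whose stability depends
# on theta (theta >= 1/2 is unconditionally stable for this problem).

def theta_step(u, A, dt, theta):
    """Advance one step: (I - dt*theta*A) u_new = (I + dt*(1-theta)*A) u."""
    n = u.size
    I = identity(n, format="csc")
    lhs = (I - dt * theta * A).tocsc()
    rhs = (I + dt * (1.0 - theta) * A) @ u
    return spsolve(lhs, rhs)

# Discretize A = D d^2/dx^2 - k with centered finite differences.
n, D, k = 100, 1.0, 2.0
h = 1.0 / (n + 1)
lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
A = D * lap - k * identity(n)

x = np.linspace(h, 1.0 - h, n)
u = np.sin(np.pi * x)                            # initial condition
for _ in range(50):
    u = theta_step(u, A, dt=1e-2, theta=0.5)     # Crank-Nicolson choice
```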


Related concepts (22)

Estimator (statistics)

In statistics, an estimator is a function used to estimate a moment of a probability distribution (such as its expectation or variance). It can, for example, serve to estimate certain characteristics of a whole population from data obtained on a sample, as in a survey. The definition and use of such estimators constitute inferential statistics. The quality of an estimator is expressed through its convergence, bias, efficiency, and robustness.
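As a concrete, self-contained illustration of the bias notion above (not drawn from the thesis; all values are invented), compare the two classical sample-variance estimators in a quick simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
true_var = 4.0
n, trials = 10, 100_000

# Draw many small samples and average the two variance estimators.
samples = rng.normal(0.0, np.sqrt(true_var), size=(trials, n))
biased = samples.var(axis=1, ddof=0).mean()     # divides by n
unbiased = samples.var(axis=1, ddof=1).mean()   # divides by n - 1

print(f"biased:   {biased:.3f}")    # ~ (n-1)/n * 4 = 3.6
print(f"unbiased: {unbiased:.3f}")  # ~ 4.0
```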

Parameter

A parameter, generally, is any characteristic that can help in defining or classifying a particular system (meaning an event, project, object, situation, etc.). That is, a parameter is an element of a system that is useful, or critical, when identifying the system, or when evaluating its performance, status, condition, etc. "Parameter" has more specific meanings within various disciplines, including mathematics, computer programming, engineering, statistics, logic, linguistics, and electronic musical composition.

Numerical analysis

Numerical analysis is a discipline at the interface of mathematics and computer science. It is concerned both with the foundations and with the practical use of methods for solving problems of mathematical analysis through purely numerical computation. More formally, numerical analysis is the study of algorithms for numerically solving, by discretization, problems of continuous mathematics (as distinguished from discrete mathematics).
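A minimal, purely illustrative instance of the discretization idea (names and values below are invented): replacing a derivative with a finite difference turns a problem of continuous mathematics into arithmetic a computer can evaluate.

```python
import numpy as np

# Finite-difference approximation of f'(x): a tiny instance of
# discretizing a continuous problem, with first-order accuracy in h.
f = np.sin
x = 1.0
for h in (1e-1, 1e-2, 1e-3):
    approx = (f(x + h) - f(x)) / h
    print(h, approx, abs(approx - np.cos(x)))   # error shrinks ~ h
```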

Related MOOCs (25)

Analyse Numérique pour Ingénieurs

This course contains the first 7 chapters of a numerical analysis course given to bachelor students at EPFL. Basic tools are described in chapters 1 to 5. The numerical solution of eq…

Analyse numérique pour ingénieurs

This course contains the first 7 chapters of a numerical analysis course given to bachelor students at EPFL. Basic tools are described in chapters 1 to 5. The numerical solution of eq…

Analyse Numérique

This course contains the first 7 chapters of a numerical analysis course given to bachelor students at EPFL. Basic tools are described in chapters 1 to 5. The numerical solution of eq…

Related publications (11)

High-order numerical solvers for conservation laws suffer from the Gibbs phenomenon close to discontinuities, which leads to spurious oscillations and degrades the solution accuracy. A possible strategy to reduce their amplitude is to add a suitable amount of artificial viscosity. Several models are available in the literature; they rely on a shock sensor that adds dissipation where the solution loses regularity, but their dependence on problem-specific parameters limits their performance. To address this issue, in this thesis we propose a new technique based on artificial neural networks. In particular, we focus on the construction of multilayer perceptrons. Emphasis is given to the training phase, which is carried out on a robust dataset created using the classical models with optimal parameters. The online evaluation is then integrated into the numerical solver of the partial differential equation. Although most of the effort is devoted to one-dimensional problems, an extension to a two-dimensional setting is provided.

Several numerical results demonstrate the capabilities of the network-based technique. Initially, we focus on one-dimensional scalar problems, with the Burgers equation as our first benchmark test. We then move to more challenging cases, characterized by non-convex flux functions or involving systems of equations (the compressible Euler equations). The same strategy is followed for multi-dimensional problems. In most cases, the proposed model guarantees high accuracy in the presence of smooth solutions and captures discontinuities (shock and contact waves). In general, the results are comparable to, or better than, those of the classical models with properly tuned parameters. A final performance analysis is carried out.
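As a rough illustration of the kind of network involved (the feature count, layer sizes, and names below are assumptions, not the thesis's actual design), a multilayer perceptron can map local features of the discrete solution to a non-negative artificial-viscosity value:

```python
import torch
import torch.nn as nn

# Illustrative multilayer perceptron in the spirit of the abstract:
# it maps local features of the discrete solution (e.g. scaled jumps
# or modal-decay indicators in a cell) to an artificial-viscosity
# amount. Architecture and feature choice here are assumptions.

class ViscosityMLP(nn.Module):
    def __init__(self, n_features=5, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # viscosity must be >= 0
        )

    def forward(self, x):
        return self.net(x)

model = ViscosityMLP()
# Offline training against (features, target viscosity) pairs produced
# by classical models with tuned parameters, as the abstract describes;
# here a random placeholder batch stands in for that dataset.
features = torch.randn(64, 5)
targets = torch.rand(64, 1)
loss = nn.MSELoss()(model(features), targets)
loss.backward()
```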

2018

Removing geometrical details from a complex domain is a classical operation in computer-aided design for simulation and manufacturing. This procedure simplifies the meshing process and enables faster simulations with lower memory requirements. However, depending on the partial differential equation one wants to solve in the geometrical model of interest, removing some important geometrical features may greatly impact the solution accuracy. For instance, in solid mechanics simulations, such features can be holes or fillets near stress concentration regions. Unfortunately, the effect of geometrical simplification on the accuracy of the problem solution is often neglected, because analyzing it is a time-consuming task that is usually performed manually, based on the expertise of engineers. It is therefore important to better understand the effect of geometrical model simplification, also called defeaturing, to improve our control of the simulation accuracy throughout the design and analysis phases.

In this thesis, we formalize the process of defeaturing, and we analyze its impact on the accuracy of solutions of some partial differential problems. To achieve this goal, we first precisely define the error between the problem solution defined in the exact geometry and the one defined in the simplified geometry. Then, we introduce an a posteriori estimator of the energy norm of this error. This allows us to reliably and efficiently control the error coming from the addition or removal of geometrical features. We subsequently consider a finite element approximation of the defeatured problem, and the induced numerical error is integrated into the proposed defeaturing error estimator. In particular, we address the special case of isogeometric analysis based on (truncated) hierarchical B-splines, in possibly trimmed and multipatch geometries. In this framework, we derive a reliable a posteriori estimator of the overall error, i.e., of the error between the exact solution defined in the exact geometry and the numerical solution defined in the defeatured geometry.

We then propose a two-fold adaptive strategy for analysis-aware defeaturing, which starts from a coarse mesh on a fully defeatured computational domain. On the one hand, the algorithm performs classical finite element mesh refinements in a (partially) defeatured geometry. On the other hand, the strategy also allows for geometrical refinement: at each iteration, the algorithm can choose which missing geometrical features should be added back to the simplified geometrical model in order to obtain a more accurate solution.

Throughout the thesis, we validate the presented theory, the properties of the aforementioned estimators, and the proposed adaptive strategies through an extensive set of numerical experiments.
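To make the two-fold adaptive strategy concrete, here is a toy, self-contained sketch of the decision loop; real meshes, solves, and estimators are replaced by invented numbers, and none of the names below come from the thesis:

```python
# Toy sketch of the two-fold adaptive idea described above. `num_err`
# mimics the finite element error (halved by each mesh refinement) and
# `feat_err` holds one estimated error contribution per missing
# geometrical feature (removed when that feature is reinserted). In
# reality both quantities would be recomputed from a posteriori
# estimators after every solve; here they are just invented numbers.

def adaptive_defeaturing(num_err, feat_err, tol):
    history = []
    while num_err + sum(feat_err.values()) >= tol:
        if num_err >= max(feat_err.values(), default=0.0):
            num_err /= 2.0                        # refine the mesh
            history.append("refine mesh")
        else:
            worst = max(feat_err, key=feat_err.get)
            feat_err.pop(worst)                   # reinsert the feature
            history.append(f"add feature {worst}")
    return history

steps = adaptive_defeaturing(
    num_err=1.0,
    feat_err={"hole_A": 0.8, "fillet_B": 0.05},
    tol=0.2,
)
print(steps)   # ['refine mesh', 'add feature hole_A', 'refine mesh', 'refine mesh']
```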

A functional time series is a temporally ordered sequence of not necessarily independent random curves. While the statistical analysis of such data has traditionally been carried out under the assumption of completely observed functional data, it may well happen that the statistician has access only to a relatively small number of sparse measurements for each random curve. These discrete measurements may moreover be irregularly scattered in each curve's domain, be missing altogether for some curves, and be contaminated by measurement noise. This sparse sampling protocol escapes the reach of established estimators in functional time series analysis and therefore requires the development of a novel methodology.
The core objective of this thesis is the development of a non-parametric statistical toolbox for the analysis of sparsely observed functional time series data. Assuming smoothness of the latent curves, we construct a local-polynomial-smoother-based estimator of the spectral density operator, yielding a consistent estimator of the complete second-order structure of the data. Moreover, the spectral-domain recovery approach allows the latent curve data at a given time to be predicted by borrowing strength from the estimated dynamic correlations across the entire time series. Beyond predicting the latent curves from their noisy point samples, the method fills in gaps in the sequence (curves nowhere sampled), denoises the data, and serves as a basis for forecasting.
A classical non-parametric apparatus for encoding the dependence between a pair of, or among multiple, functional time series, whether sparsely or fully observed, is the functional lagged regression model. It consists of a linear filter between the regressor time series and the response. We show how to tailor the smoother-based estimators to the estimation of the cross-spectral density operators and the cross-covariance operators and, by means of spectral truncation and Tikhonov regularisation techniques, how to estimate the lagged regression filter and predict the response process.
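For intuition on the spectral approach to lagged regression, a minimal finite-dimensional sketch (the representation of operators as plain matrices and all names below are assumptions, not the thesis's code): at each frequency, the filter's transfer operator is estimated from the cross- and auto-spectral densities, with Tikhonov regularisation taming the inversion.

```python
import numpy as np

def tikhonov_transfer(F_xx, F_yx, rho):
    """Estimate the frequency-response operator B(omega) from spectral
    density matrices: B = F_yx (F_xx + rho I)^{-1}. Operators are
    discretized as complex matrices; rho > 0 is the Tikhonov
    regularisation parameter controlling the ill-posed inversion."""
    n = F_xx.shape[0]
    return F_yx @ np.linalg.inv(F_xx + rho * np.eye(n))

# Toy usage with a random Hermitian positive semi-definite F_xx.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
F_xx = A @ A.conj().T            # spectral density of the regressors
F_yx = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
B_hat = tikhonov_transfer(F_xx, F_yx, rho=0.1)
```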
The simulation studies revealed the following findings: (i) if one is free to design a sampling scheme with a fixed number of measurements, it is advantageous to distribute these measurements sparsely over a longer time horizon rather than concentrating them over a shorter horizon to achieve dense measurements, in order to diminish the spectral density estimation error; (ii) the developed functional recovery predictor surpasses the static predictor, which does not exploit the temporal dependence; (iii) neither of the two considered regularisation techniques can, in general, dominate the other for estimation in functional lagged regression models. The new methodologies are illustrated by applications to real data: meteorological data revolving around fair-weather atmospheric electricity measured in Tashkent, Uzbekistan, and at Wank mountain, Germany; and a case study analysing the dependence of the US Treasury yield curve on macroeconomic variables.

As a secondary contribution, we present a novel simulation method for general stationary functional time series defined through their spectral properties. A simulation study shows the universality of this approach and the superiority of spectral-domain simulation over temporal-domain simulation in some situations.