Publication

Comparative study of the effects of three data-interpretation methodologies on the performance of geotechnical back analysis

Ian Smith
2020
Journal paper
Abstract

Back analysis can provide engineers with important information for better decision-making. Over the years, research on back analysis has focused mainly on optimisation techniques, while comparative studies of data-interpretation methodologies have seldom been reported. This paper examines the effects of three data-interpretation methodologies on the performance of geotechnical back analysis. In general, there are two types of approaches for interpreting model predictions using field measurements, deterministic and population-based, both of which are considered in this study. The methodologies compared are (a) error-domain model falsification (EDMF), (b) Bayesian model updating and (c) residual minimisation. Back analyses of an excavation case history in Singapore using the three methodologies indicate that each has strengths and limitations. Residual minimisation, though easy to implement, shows limited capability to interpret measurement data with large uncertainties. EDMF provides robustness against incomplete knowledge of the correlation structure. This robustness comes at the expense of precision: EDMF yields wider confidence intervals for the identified parameter values and predicted quantities than Bayesian model updating. In this regard, a modified EDMF implementation is proposed that improves upon the limitations of the traditional EDMF method, thus enhancing the quality of the identification outcomes.
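As a rough illustration of the falsification idea behind EDMF, the sketch below discards candidate parameter values whose prediction-measurement residual falls outside threshold bounds derived from the combined uncertainty. This is a minimal sketch only, not the paper's implementation: the forward model, the measurement value and the uncertainty magnitudes are all assumptions chosen for demonstration.

```python
import numpy as np

# Minimal EDMF-style sketch (illustrative only; names and numbers are
# assumptions, not taken from the paper). Candidate parameter values are
# falsified when the residual between predicted and measured response
# falls outside threshold bounds derived from the combined uncertainty.

rng = np.random.default_rng(0)

def wall_deflection_model(stiffness):
    """Hypothetical forward model: predicted wall deflection (mm)
    as a function of a soil-stiffness parameter (MPa)."""
    return 2000.0 / stiffness

measured_deflection = 55.0                       # assumed field measurement (mm)
candidate_stiffness = np.linspace(20, 60, 200)   # candidate parameter grid (MPa)

# Combined model + measurement uncertainty, sampled via Monte Carlo
# (assumed zero-mean Gaussian with a 5 mm standard deviation).
uncertainty_samples = rng.normal(0.0, 5.0, 10_000)

# Threshold bounds at an assumed target reliability of identification (95%).
low, high = np.percentile(uncertainty_samples, [2.5, 97.5])

residuals = wall_deflection_model(candidate_stiffness) - measured_deflection
candidate_set = candidate_stiffness[(residuals >= low) & (residuals <= high)]

print(f"Candidate stiffness range: {candidate_set.min():.1f}-"
      f"{candidate_set.max():.1f} MPa "
      f"({candidate_set.size} of {candidate_stiffness.size} models survive)")
```

Unlike residual minimisation, which returns a single best-fit parameter value, EDMF retains every candidate model that survives falsification; this population-based output is what provides the robustness to unknown correlation structure, at the cost of the wider intervals noted in the abstract.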

Related concepts (37)
Data
In common usage and statistics, data (US: /ˈdætə/; UK: /ˈdeɪtə/) is a collection of discrete or continuous values that convey information, describing the quantity, quality, fact, statistics, other basic units of meaning, or simply sequences of symbols that may be further interpreted formally. A datum is an individual value in a collection of data. Data is usually organized into structures such as tables that provide additional context and meaning, and which may themselves be used as data in larger structures.
Data analysis
Data analysis is the process of inspecting, cleansing, transforming, and modeling data with the goal of discovering useful information, informing conclusions, and supporting decision-making. Data analysis has multiple facets and approaches, encompassing diverse techniques under a variety of names, and is used in different business, science, and social science domains. In today's business world, data analysis plays a role in making decisions more scientific and helping businesses operate more effectively.
Standard error
The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM). The sampling distribution of a mean is generated by repeated sampling from the same population and recording of the sample means obtained. This forms a distribution of different means, and this distribution has its own mean and variance.
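As a quick numeric illustration of this definition (added here for clarity, with made-up values), the standard error of the mean can be estimated from a single sample as the sample standard deviation divided by the square root of the sample size:

```python
import numpy as np

# Illustrative sketch: estimating the standard error of the mean (SEM)
# from one sample, using SEM = s / sqrt(n). Values are invented.
sample = np.array([12.1, 11.8, 12.5, 12.0, 11.6, 12.3])
sem = sample.std(ddof=1) / np.sqrt(sample.size)  # ddof=1 -> sample std. dev.
print(f"mean = {sample.mean():.2f}, SEM = {sem:.3f}")
```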
Related publications (71)

Design of an Open-Loop Pile-Oscillation Program in the CROCUS Reactor

Andreas Pautz, Vincent Pierre Lamirand, Thomas Jean-François Ligonnet, Axel Guy Marie Laureau

As a follow-up to the CEA-EPFL PETALE experimental program on stainless steel nuclear data, the EPFL initiated an open-loop pile-oscillation experimental program in the CROCUS reactor: BLOOM. A reproduction of the critical experiments of PETALE, the progra ...
2024

A probabilistic finite element method based on random meshes: A posteriori error estimators and Bayesian inverse problems

Assyr Abdulle, Giacomo Garegnani

We present a novel probabilistic finite element method (FEM) for the solution and uncertainty quantification of elliptic partial differential equations based on random meshes, which we call random mesh FEM (RM-FEM). Our methodology allows us to introduce a pr ...
Elsevier Science SA, 2021

Validating model-based data interpretation methods for quantification of reserve capacity

Ian Smith, Sai Ganesh Sarvotham Pai

Optimal performance of civil infrastructure is an important aspect of liveable cities. A judicious combination of physics-based models with monitoring data in a validated methodology that accounts for uncertainties is explored in this paper. This methodolo ...
2021
Related MOOCs (32)
Selected Topics on Discrete Choice
Discrete choice models are used extensively in many disciplines where it is important to predict human behavior at a disaggregate level. This course is a follow up of the online course “Introduction t
Digital Signal Processing I
Basic signal processing concepts, Fourier analysis and filters. This module can be used as a starting point or a basic refresher in elementary DSP
