Accurate measurement-data interpretation leads to increased understanding of structural behavior and enhanced asset-management decision making. In this paper, four data-interpretation methodologies are compared: residual minimization; traditional Bayesian model updating; modified Bayesian model updating, with an L∞-norm-based Gaussian likelihood function; and error-domain model falsification (EDMF), a method that rejects model instances whose differences between predictions and measurements are unlikely. In the modified Bayesian model updating methodology, a correction is applied to the likelihood function to account for the effect of a finite number of measurements on posterior probability–density functions. The application of these methodologies to condition assessment and fatigue-life prediction is illustrated on a highway steel–concrete composite bridge with four spans and a total length of 219 m. A detailed 3D finite-element plate-and-beam model of the bridge and weigh-in-motion data are used to obtain the time–stress response at a fatigue-critical location along the bridge span. The time–stress response, presented as a histogram, is compared with measured strain responses either to update prior knowledge of model parameters (residual minimization and the Bayesian methodologies) or to obtain candidate model instances (EDMF). It is concluded that the EDMF and modified Bayesian model updating methodologies provide robust predictions of fatigue life compared with residual minimization and traditional Bayesian model updating in the presence of correlated non-Gaussian uncertainty. EDMF has the additional advantages of ease of understanding and applicability for practicing engineers, thus enabling incremental asset-management decision making over long service lives. Finally, parallel implementations of EDMF using grid sampling have lower computation times than implementations using adaptive sampling.