# Updated braking forces for the assessment of road bridges

Abstract

Safety assessments of road bridges under braking events combine the braking force, acting along the longitudinal axis of the deck, with a vertical load that accounts for the vertical component of the traffic action. In modern design standards, the vertical load models result from probabilistic calibration procedures targeting predefined return periods. In contrast, the braking force was derived from a deterministic characterization of the vehicle configurations and of the braking process. The return period of the braking force is therefore unclear and may not be consistent with that of the vertical load model. Significant deviations from the target return period may lead either to uneconomical decisions, e.g. uncalled-for retrofitting interventions, or to inaccurate structural safety verifications. This thesis presents an original stochastic model to compute site-specific values of the braking force as a function of the return period. The model takes into account the length of the bridge deck and its dynamic properties for vibrations in the longitudinal direction, as well as several sources of randomness related to braking events, all of which comply with real-world measurements:

- vehicle configurations, taken from a time history of crossing vehicles;
- driver response times, randomly generated from probability distributions defined within the scope of this project;
- deceleration profiles of the vehicles, resampled from catalogues of realistic deceleration profiles.

The stochastic model uses Monte Carlo simulation of braking events and computes the maximum dynamic response of the bridge to each event. The computed maxima are collected into an empirical distribution function of the braking force, and the model returns the quantile of this distribution that is suitable for safety assessments.
This value of the braking force is specific to the given bridge properties, to the traffic characteristics, and to the target return period. An additional novelty of this research is the estimation of a rate of occurrence of braking events on motorways per vehicle-distance travelled. This parameter enables the estimation of the period of time covered by the simulations as a function of the traffic flow and of the total number of braking events simulated, a step that is fundamental to determining the value of the braking force with a given return period. The braking forces returned by the stochastic model show significant dependence on the bridge length, on the natural vibration period of the deck in the longitudinal direction, and on the number of directions of traffic on the deck. Conversely, the damping ratio, traffic on the fast lane or on weekends, and a 20% increase in traffic show no substantial influence on the braking force. Moreover, the two motorway locations considered as sources of traffic data, Denges and Monte Ceneri, both in Switzerland, yielded braking forces of similar magnitude despite significant differences in traffic characteristics. Finally, the compiled results served to calibrate an updated braking force that depends explicitly on the parameters found to be relevant, as well as on the return period, so that it can be adopted by different standards even if they enforce different safety targets. This updated expression shows that the braking forces of current codes tend to be conservative and can therefore be improved based on the findings of this project.
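The pipeline the abstract describes — simulate event maxima, estimate the period of time covered from the occurrence rate and the traffic flow, then read off the quantile matching the target return period — can be sketched as follows. All numerical values, and the lognormal stand-in for the simulated maxima, are illustrative assumptions, not results from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- illustrative inputs (assumed values, not thesis results) ---
n_events = 100_000            # simulated braking events
rate_per_veh_km = 1e-7        # braking events per vehicle-km (assumed)
flow_veh_per_year = 1.0e7     # vehicles per year crossing the bridge
deck_length_km = 0.05         # 50 m deck

# Stand-in for the Monte Carlo maxima of the dynamic response:
# each entry is the peak longitudinal force of one braking event.
max_force_kN = rng.lognormal(mean=4.0, sigma=0.5, size=n_events)

# Period of time covered by the simulated events.
events_per_year = rate_per_veh_km * flow_veh_per_year * deck_length_km
years_covered = n_events / events_per_year

# A force with return period R is exceeded on average once every
# R * events_per_year events, i.e. with per-event probability
# 1 / (R * events_per_year); the design value is the matching quantile.
return_period_years = 1000.0
p_exceed = 1.0 / (return_period_years * events_per_year)
design_force_kN = np.quantile(max_force_kN, 1.0 - p_exceed)
```

With these assumed numbers the simulations cover far more years than the target return period, which is the regime in which the empirical quantile is a stable estimate.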

Official source

This page is generated automatically and may contain information that is not correct, complete, up to date, or relevant to your search. The same applies to all other pages on this site. Please verify the information against EPFL's official sources.

Related concepts (25)

Stochastic process

A stochastic process (or random process; see stochastic calculus), also called a random function (see probability theory), represents the evolution, in discrete or continuous time, of a random variable. …

Road traffic

Road traffic is the regulated movement of automobiles, other vehicles, and pedestrians; in the broad sense, it takes place on a road, a motorway, or any other type of roadway. …

Anti-lock braking system

The anti-lock braking system, better known by the acronym ABS (from the German Antiblockiersystem), is a brake-assistance system used on wheeled vehicles that limits wheel lock-up. …

Related publications (22)

Modern data storage systems are extremely large and consist of several tens or hundreds of nodes. In such systems, node failures are daily events, and safeguarding data from them poses a serious design challenge. The focus of this thesis is the data reliability analysis of storage systems and, in particular, the effect of different design choices and parameters on system reliability. Data redundancy, in the form of replication or advanced erasure codes, is used to protect data from node failures. By storing redundant data across several nodes, the surviving redundant data on surviving nodes can be used to rebuild the data lost by failed nodes. As these rebuild processes take a finite amount of time to complete, there exists a nonzero probability of additional node failures during rebuild, which may eventually lead to a situation in which some data have lost so much redundancy that they are irrecoverably lost from the system. The average time taken by the system to suffer an irrecoverable data loss, also known as the mean time to data loss (MTTDL), is a measure of data reliability that is commonly used to compare different redundancy schemes and to study the effect of various design parameters. The theoretical analysis of MTTDL, however, is a challenging problem for non-exponential, real-world failure and rebuild time distributions and for general data placement schemes. To address this issue, this thesis develops a methodology for reliability analysis based on the probability of a direct path to data loss during rebuild. The analysis is detailed in the sense that it accounts for the rebuild times involved, the amounts of partially rebuilt data when additional nodes fail during rebuild, and the fact that modern systems use an intelligent rebuild process that first rebuilds the data with the least redundancy left.
Through rigorous arguments and simulations it is established that the methodology developed is well suited to the reliability analysis of real-world data storage systems. Applying this methodology to storage systems with different types of redundancy, various data placement schemes, and rebuild constraints, the effect of the design parameters on system reliability is studied. When sufficient network bandwidth is available for rebuild processes, it is shown that spreading the redundant data corresponding to each node across a higher number of other nodes and using a distributed, intelligent rebuild process improves the system MTTDL. In particular, declustered placement, which spreads the redundant data corresponding to each node equally across all other nodes of the system, is found to potentially have significantly higher MTTDL values than other placement schemes, especially for large storage systems. This implies that more reliable storage systems can be designed merely by changing the data placement, without compromising storage efficiency or performance. The effect of limited network rebuild bandwidth on system reliability is also analyzed, and it is shown that, for certain redundancy schemes, spreading redundant data across a larger number of nodes can actually be detrimental to reliability. It is also shown that the MTTDL values are invariant within a large class of node failure time distributions with the same mean. This class includes the exponential distribution as well as real-world distributions such as the Weibull or gamma. This result implies that the system MTTDL is not affected if the failure distribution is replaced by an exponential one with the same mean.
This observation is also of great importance because it suggests that the MTTDL results obtained in the literature by assuming exponential node failure distributions may still be valid for real-world storage systems despite the fact that real-world failure distributions are non-exponential. In contrast, it is shown that the MTTDL is sensitive to the node rebuild time distribution. A storage system reliability simulator is built to verify the theoretical results mentioned above. The simulator is sufficiently complex to perform all required failure events and rebuild tasks in a storage system, to use real-world failure and rebuild time distributions for scheduling failures and rebuilds, to take into account partial rebuilds when additional node failures occur, and to simulate different data placement schemes and compare their reliability. The simulation results are found to match the theoretical predictions with high confidence for a wide range of system parameters, thereby validating the methodology of reliability analysis developed.
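The direct-path idea can be illustrated with the textbook first-order MTTDL estimate for two-way replication under exponential node failures and a deterministic rebuild time. The function below is a minimal sketch of that approximation only; the thesis methodology additionally handles non-exponential distributions, partial rebuilds, and general placement schemes.

```python
import math

def mttdl_direct_path(n_nodes, mttf_hours, rebuild_hours):
    """First-order MTTDL estimate for 2-way replication (sketch).

    A rebuild is triggered roughly every mttf/n hours; data is lost
    on the "direct path" if the node holding the surviving copy fails
    before the rebuild of duration `rebuild_hours` completes
    (exponential failure clocks assumed).
    """
    lam = 1.0 / mttf_hours
    p_direct_loss = 1.0 - math.exp(-lam * rebuild_hours)
    expected_cycles = 1.0 / p_direct_loss   # geometric number of rebuilds
    return (mttf_hours / n_nodes) * expected_cycles
```

For small `lam * rebuild_hours` this reduces to the familiar mttf²/(n · rebuild) approximation, e.g. about 10⁷ hours for 100 nodes with a 10⁵-hour MTTF and a 10-hour rebuild.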

Magnetoencephalography (MEG) is an imaging technique used to measure the magnetic field outside the human head produced by electrical activity inside the brain. The MEG inverse problem, identifying the location of the electrical sources from the magnetic signal measurements, is ill-posed: there is an infinite number of mathematically correct solutions. Common source localization methods assume the source does not vary with time and do not provide estimates of the variability of the fitted model. Here, we reformulate the MEG inverse problem by considering time-varying locations for the sources and their electrical moments, and we model their time evolution using a state-space model. Based on this predictive model, we approach the inverse problem by finding the posterior source distribution given the multiple channels of observations at each time, rather than fitting fixed source parameters. The new model is more realistic than common models and allows us to estimate the variation of the sources' strength, orientation, and position. We propose two new Monte Carlo methods based on sequential importance sampling. Unlike the usual MCMC sampling scheme, the new methods work in this setting without the need to tune a high-dimensional transition kernel, which would be very costly. The dimensionality of the unknown parameters is extremely large, and the size of the data is even larger. We use the Parallel Virtual Machine (PVM) to speed up the computation.
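Sequential importance sampling with resampling (a bootstrap particle filter) can be sketched on a toy one-dimensional state-space model. The dynamics, noise levels, and dimensions below are illustrative stand-ins, orders of magnitude smaller than the MEG source model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy linear-Gaussian state-space model (illustrative only):
#   state:       x_t = 0.95 * x_{t-1} + N(0, 0.3^2)
#   observation: y_t = x_t + N(0, 0.5^2)
T, N = 50, 2000                 # time steps, particles
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.95 * x_true[t - 1] + rng.normal(0.0, 0.3)
y = x_true + rng.normal(0.0, 0.5, size=T)

# Bootstrap particle filter: propagate, weight by likelihood, resample.
particles = rng.normal(0.0, 1.0, size=N)
posterior_mean = np.zeros(T)
for t in range(T):
    particles = 0.95 * particles + rng.normal(0.0, 0.3, size=N)  # propagate
    log_w = -0.5 * ((y[t] - particles) / 0.5) ** 2               # importance weights
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    posterior_mean[t] = np.sum(w * particles)                    # posterior mean
    particles = rng.choice(particles, size=N, p=w)               # resample
```

Because only the transition density is used as the proposal, no high-dimensional MCMC transition kernel has to be tuned, which is the practical advantage the abstract points to.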

Olfactometer experiments are used to determine the effect of odours on the behaviour of organisms such as insects or nematodes, and typically yield data comprising many groups of small counts, overdispersed relative to the multinomial distribution. Overdispersion reflects a lack of independence or heterogeneity among individuals and can lead to statistics with larger variances than expected and to possible losses of efficiency. In this thesis, distributions that generalise the multinomial distribution are developed. These models are based on non-homogeneous Markov chain theory, take the overdispersion into account, and potentially provide a physical interpretation of the overdispersion seen in olfactometer data. Some inference aspects are considered, including a comparison of the asymptotic relative efficiencies of three different sampling schemes. It is checked that the empirical distributions approximate the corresponding asymptotic distributions well. Observable differences in parameter estimates between data generated under different hypotheses are also studied. Finally, different models, intended to shed light on various aspects of the data and of the experimental procedure, are applied to three real olfactometer datasets.
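Overdispersion relative to the multinomial can be illustrated with a Dirichlet-multinomial stand-in, in which run-to-run heterogeneity in the choice probabilities inflates the count variances. This is a generic illustration of the phenomenon, not the Markov-chain-based generalisations developed in the thesis; all numbers are assumed.

```python
import numpy as np

rng = np.random.default_rng(7)

# Four-arm olfactometer, 20 insects per run, 2000 runs (illustrative).
n_runs, n_insects = 2000, 20
alpha = np.array([2.0, 2.0, 2.0, 2.0])   # Dirichlet concentration (assumed)
p_mean = alpha / alpha.sum()

# Homogeneous runs: plain multinomial counts.
counts_mn = rng.multinomial(n_insects, p_mean, size=n_runs)

# Heterogeneous runs: choice probabilities vary between runs (Dirichlet),
# giving overdispersed (Dirichlet-multinomial) counts.
p_runs = rng.dirichlet(alpha, size=n_runs)
counts_dm = np.array([rng.multinomial(n_insects, p) for p in p_runs])

# Variance of the first-arm count: n*p*(1-p) = 3.75 for the multinomial,
# inflated by roughly (n + sum(alpha)) / (1 + sum(alpha)) ≈ 3.1 here.
var_mn = counts_mn[:, 0].var()
var_dm = counts_dm[:, 0].var()
```

The inflated variance is exactly the kind of extra-multinomial variation that motivates fitting an overdispersed model rather than treating individuals as independent.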