
Publication

# Seismic Vulnerability Assessment at Large Scale – City of Geneva

Abstract

Evaluating seismic vulnerability at large scale requires awareness of the soil properties, the types of materials used in building construction during different periods, the construction standards and techniques of each period, the epicentre of the earthquake, and so on. An ideal evaluation would take all of these variables into account; in this study, however, only the building typologies are considered (classified according to EMS-98). More precisely, the typologies are first identified manually for a small data set. Each building is described by four attributes: 1) construction period, 2) number of floors, 3) surface of the building, and 4) roof shape. To obtain these attributes, data sets from the Federal Statistical Office (OFS) and the Geneva territorial information system (SITG) were acquired; these data contained many imprecisions and irregularities and were therefore examined and cleaned. Machine learning techniques are then applied to obtain the building typologies for the whole city of Geneva: the methods are trained on a manually collected learning set and applied to a testing set covering all the attribute combinations of Geneva, yielding the corresponding typologies.
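The classification step described above can be sketched as a minimal nearest-neighbour rule over the four attributes. The attribute encoding, distance function, training pairs, and typology labels below are invented for illustration; they are not the study's actual model or data.

```python
# Hypothetical sketch: assign an EMS-98 typology to a building from its four
# attributes (construction period, floors, surface, roof shape) by finding the
# closest hand-labelled example. All codes and labels here are invented.

def nearest_typology(building, labelled_set):
    """Return the typology of the labelled building closest in attribute space."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(labelled_set, key=lambda item: dist(item[0], building))
    return best[1]

# (period code, floors, surface m2 / 100, roof shape code) -> typology
training = [
    ((1, 4, 6.0, 0), "M3"),    # e.g. an older masonry building
    ((3, 8, 12.0, 1), "RC2"),  # e.g. a recent reinforced-concrete building
]

print(nearest_typology((1, 5, 6.5, 0), training))  # -> M3
```

In practice any supervised classifier trained on the manually labelled set could fill this role; the nearest-neighbour rule is only the simplest stand-in.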

Official source


Related concepts (14)

Construction

Construction is the act of assembling the different elements of a building using materials …

Machine learning

Machine learning (in French, apprentissage automatique), also known as artificial learning or statistical learning, is …

Building (construction)

A building, in the common sense, is an immovable construction, made by human intervention, intended in part to serve as shelter, that is, to protect people and goods from the weather …

Related publications (40)


Learning to embed data into a space where similar points are close together and dissimilar points are far apart is a challenging machine learning problem. In this dissertation we study two learning scenarios that arise in the context of learning embeddings, and one scenario in efficiently estimating an empirical expectation. We present novel algorithmic solutions and demonstrate their applications on a wide range of data sets.
The first scenario deals with learning from small data with a large number of classes. This setting is common in computer vision problems such as person re-identification and face verification. To address this problem we present a new algorithm called Weighted Approximate Rank Component Analysis (WARCA), which is scalable, robust, non-linear, and independent of the number of classes. We empirically demonstrate the performance of our algorithm on 9 standard person re-identification data sets, where we obtain state-of-the-art performance in terms of both accuracy and computational speed.
The second scenario we consider is learning embeddings from sequences, where recurrent neural networks have proved to be effective. However, existing recurrent neural networks suffer from problems that make them data-hungry (high sample complexity) and difficult to train. We present a new recurrent neural network called Kronecker Recurrent Units (KRU), which addresses these issues through Kronecker matrices. We show its performance on 7 applications spanning computer vision, language modeling, music modeling, and speech recognition.
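The parameter saving behind Kronecker-structured weights can be illustrated with the standard identity (A ⊗ B) vec(X) = vec(B X Aᵀ), which applies a Kronecker-factored matrix without ever materializing it. This is a generic sketch of that identity, not the KRU implementation itself.

```python
import numpy as np

def kron_matvec(A, B, x):
    """Compute (A kron B) @ x without forming the full Kronecker product.

    Uses (A kron B) vec(X) = vec(B X A^T), where vec stacks columns.
    A is m x n, B is p x q, x has length n * q; the result has length m * p.
    """
    m, n = A.shape
    p, q = B.shape
    X = x.reshape(n, q).T          # un-vec: column j of X is x[j*q:(j+1)*q]
    Y = B @ X @ A.T                # small p x m intermediate
    return Y.T.reshape(-1)         # re-vec by stacking columns of Y

rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 4)), rng.standard_normal((2, 5))
x = rng.standard_normal(4 * 5)
assert np.allclose(kron_matvec(A, B, x), np.kron(A, B) @ x)
```

The factored form stores mn + pq parameters instead of mp · nq for the dense matrix, which is the kind of reduction Kronecker-parameterized recurrent weights exploit.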
Most machine learning algorithms are formulated as minimizing an empirical expectation over a finite collection of samples. In this thesis we also investigate the problem of efficiently estimating a weighted average over large data sets. We present a new data structure called the Importance Sampling Tree (IST), which permits fast estimation of a weighted average without looking at all the samples. We successfully evaluate our data structure in the training of neural networks, where it efficiently finds informative samples.

Sai Praneeth Reddy Karimireddy

A traditional machine learning pipeline involves collecting massive amounts of data centrally on a server and training models to fit the data. However, increasing concerns about the privacy and security of users' data, combined with the sheer growth in data sizes, have incentivized looking beyond such traditional centralized approaches. Collaborative learning (which encompasses distributed, federated, and decentralized learning) instead proposes that a network of data holders collaborate to train models without transmitting any data. This new paradigm minimizes data exposure, but inherently faces some fundamental challenges. In this thesis, we bring to bear the framework of stochastic optimization to formalize these challenges and develop new algorithms for them. This serves not only to develop novel solutions, but also to test the utility of the optimization lens in modern deep learning.

We study three fundamental problems. Firstly, collaborative training replaces a one-time transmission of raw data with repeated rounds of communicating partially trained models, which quickly runs up against bandwidth constraints when dealing with large models. We propose to overcome this constraint using compressed communication. Next, collaborative training leverages the computation power of the data holders directly; however, this is not as reliable as using a data center, since only a subset of the data holders is available at any given time. We therefore require new algorithms that can efficiently utilize the unreliable local computation of the data holders. Finally, collaborative training allows any data holder to participate in the training process without anyone being able to inspect their data or local computation, which may open the system to malicious or faulty agents who seek to derail the training. We develop Byzantine-robust algorithms that are guaranteed to be resilient to such attackers.
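One standard compressor used in bandwidth-constrained training is top-k sparsification, often paired with an error-feedback memory that carries the discarded residual into the next round. The sketch below is a generic illustration of that pattern, not the specific algorithms developed in the thesis.

```python
import numpy as np

def top_k_compress(grad, k):
    """Keep the k largest-magnitude entries of a gradient; zero the rest."""
    idx = np.argsort(np.abs(grad))[-k:]
    out = np.zeros_like(grad)
    out[idx] = grad[idx]
    return out

def ef_step(grad, memory, k):
    """One error-feedback step: compress grad + memory, carry the residual.

    Only `compressed` is transmitted; `residual` stays local and is added
    back next round, so no gradient information is permanently lost.
    """
    corrected = grad + memory
    compressed = top_k_compress(corrected, k)
    return compressed, corrected - compressed

g = np.array([3.0, -1.0, 0.5, -4.0])
sent, residual = ef_step(g, np.zeros_like(g), k=2)
print(sent)      # only the two largest-magnitude coordinates survive
print(residual)  # the dropped coordinates, kept for the next round
```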

The assessment of risk maps for seismic vulnerability at large scale is based on the vulnerability of each building. To determine these vulnerabilities, each building must first be assigned a construction class, which defines its seismic behavior. Since the structures were built in different periods, the construction classes differ, and their vulnerability values have been pre-defined to simplify the analysis. For a small city, assigning each building its construction class can be done by hand; in a big city, with thousands of buildings, this process becomes long and laborious. This Master project studies the possibility of assigning the construction classes using a machine learning method. The method uses a data set of the city of Lausanne from previous work, containing around 1000 buildings analyzed by hand together with their construction classes, and aims to find the relationships between the attributes of the buildings and their construction classes. Thanks to the statistical offices, these attributes can be obtained automatically for every building in a city. Based on the relations the machine learning method finds, and using the statistical attributes, the construction class of every building can then be determined quickly and efficiently. Once the vulnerability values of each building are obtained, the risk maps can be drawn. The project studies three maps: two with the European typologies (LM1, LM2) and one with the Swiss typologies (UniGE). Comparisons are made between the maps to highlight their differences, and further comparisons show the impact on the maps of the new microzonation of Lausanne and of the new optimized method for determining the seismic displacement demand (the N2 optimized method).
This is done for the mechanical methods (LM2 and UniGE) only. Finally, the relationships between the soil characteristics (microzonation), the building attributes, and the city's historical evolution are related to the risk maps.
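Once every building has a construction class, producing the per-building layer of a risk map amounts to a lookup of the pre-defined vulnerability values. The class labels and index values below are invented placeholders, not the project's actual typologies or numbers.

```python
# Hypothetical sketch: map each building's assigned construction class to its
# pre-defined vulnerability index. Labels and values are illustrative only.

VULNERABILITY_INDEX = {"M1": 0.87, "M3": 0.74, "RC1": 0.64, "RC2": 0.53}

def risk_layer(buildings):
    """Map building id -> vulnerability index via its construction class."""
    return {bid: VULNERABILITY_INDEX[cls] for bid, cls in buildings.items()}

print(risk_layer({"B001": "M3", "B002": "RC2"}))
```

The actual maps combine this per-building layer with the microzonation and the seismic demand model; the lookup above only shows the final class-to-vulnerability step.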

2020