The explosive growth of machine learning in the age of data has led to a new probabilistic, data-driven approach to solving very different types of problems. In this paper we study the feasibility of using such data-driven algorithms to solve classic physical and mathematical problems. In particular, we try to model the solution of an inverse continuum mechanics problem in the context of linear elasticity using deep neural networks. To better address the inverse function, we first study the simplest related task, consisting of a building block of the actual composite problem. By empirically proving the learnability of simpler functions, we aim to draw conclusions with respect to the initial problem.

The basic inverse problem that motivates this paper is that of a 2D plate with an inclusion under specific loading and boundary conditions. From measurements at static equilibrium, we wish to recover the position of the hole. Although some analytical solutions have been formulated for 3D infinite solids, most notably Eshelby's inclusion problems, finite problems with particular geometries, material inhomogeneities, loading and boundary conditions require numerical methods, which are most often efficient solutions to the forward problem: the mapping from the parameter space to the measurement/signal space, i.e. in our case computing displacements and stresses knowing the size and position of the inclusion. Using numerical data generated from the well-defined forward problem, we train a neural network to approximate the inverse function relating displacements and stresses to the position of the inclusion. The preliminary results on the 2D finite problem are promising, but the black-box nature of neural networks makes the learned solution difficult to interpret.

For this reason, we study a 3D infinite continuous isotropic medium with a single concentrated load, for which the Green's function gives an analytical mathematical expression relating the relative position of the point force to the displacements in the solid. After deriving the expression of the inverse, namely recovering the relative position of the force from the Green's matrix computed at a given point in the medium, we are able to study the sensitivity of the inverse function. From both the expression of the Green's function and its inverse, we highlight the issues that might arise when training neural networks to solve the inverse problem. As the Green's function is not bijective, bijection must be enforced when training for regression. Moreover, because the displacements grow to infinity as we approach the singularity at zero, the training domain must be constrained to lie some distance away from the singularity. As we train a neural network to fit the inverse of the Green's function, we show that the input parameters should include as little redundant information as possible to ensure the most efficient training.

We then extend our analysis to two point forces. As more loads are added, bijection is harder to enforce: permutations of the forces must be taken into account, and more collisions may arise, i.e. multiple specific combinations of forces might yield the same measurements. One obvious solution is to increase the number of nodes where displacements are measured, which limits the possibility of collisions. Through new experiments, we show again that the best training is achieved with the smallest possible number of nodes, as long as the generated training data are indeed bijective.
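As a rough illustration of this data-generation and bijection-enforcement step, the sketch below superposes the displacements of several point forces at a set of measurement nodes using one standard form of the Kelvin point-force solution, and sorts the force positions into a canonical order so that permuted force sets map to a single regression target. The material constants, node layout and helper names are illustrative assumptions, not the values or code used in the experiments.

    import numpy as np

    MU, NU = 1.0, 0.3  # shear modulus and Poisson ratio (placeholder values)

    def kelvin_green(x):
        """Green's matrix G(x) of a unit point force at the origin in an
        infinite isotropic elastic medium (one standard form of the Kelvin
        solution); x must stay some distance away from the singularity at 0."""
        r = np.linalg.norm(x)
        c = 1.0 / (16.0 * np.pi * MU * (1.0 - NU))
        return c * ((3.0 - 4.0 * NU) * np.eye(3) / r + np.outer(x, x) / r**3)

    def displacements(nodes, force_positions, force_vectors):
        """Superpose the displacement field of several point forces at the
        measurement nodes (linear elasticity, so contributions simply add)."""
        u = np.zeros((len(nodes), 3))
        for p, f in zip(force_positions, force_vectors):
            for k, node in enumerate(nodes):
                u[k] += kelvin_green(node - p) @ f
        return u

    def canonical_targets(force_positions):
        """Sort force positions lexicographically so that any permutation of
        the same forces yields one and the same regression target."""
        pos = np.asarray(force_positions)
        idx = np.lexsort(pos.T[::-1])
        return pos[idx].ravel()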
As the medium is elastic, we propose a neural network architecture that matches the composite nature of the inverse problem. We also present another formulation of the problem, namely multilabel classification, which is invariant to permutations of the forces and yields good performance in the two-load case. Finally, we study the composite inverse function for 2, 3, 4 and 5 forces. By comparing training and accuracy across different neural network architectures, we identify the model that is easiest to train. Moreover, the evolution of the final accuracy as more loads are added indicates that deep neural networks (DNNs) are not well suited to fitting a non-linear mapping from and to a high-dimensional space. The results are more convincing for multilabel classification.
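A minimal sketch of such a multilabel formulation is given below, assuming the domain is discretized into a fixed set of candidate force cells so that the target is a multi-hot vector, which is by construction invariant to permutations of the forces. The layer sizes, cell count and variable names are illustrative assumptions, not the architectures compared in the report.

    import torch
    import torch.nn as nn

    N_NODES, N_CELLS = 8, 64  # measurement nodes / candidate force cells (illustrative)

    # Input: 3 displacement components per node; output: one logit per cell,
    # interpreted as "a force is located in this cell". The multi-hot target
    # does not depend on the order in which the forces are listed.
    model = nn.Sequential(
        nn.Linear(3 * N_NODES, 128),
        nn.ReLU(),
        nn.Linear(128, 128),
        nn.ReLU(),
        nn.Linear(128, N_CELLS),
    )
    loss_fn = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    def training_step(u_measured, cell_labels):
        """One gradient step: u_measured has shape (batch, 3 * N_NODES) and
        cell_labels is a multi-hot tensor of shape (batch, N_CELLS)."""
        optimizer.zero_grad()
        loss = loss_fn(model(u_measured), cell_labels)
        loss.backward()
        optimizer.step()
        return loss.item()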