According to the Artificial Intelligence Act proposed by the European Commission (expected to pass at the end of 2023), the class of High-Risk AI Systems (Title III) comprises several important applications of Deep Learning, such as autonomous vehicles and robot-assisted surgery, which rely on supervised learning with image data. According to Article 15 of this legal framework, such systems must be resilient to errors, faults or inconsistencies that may occur within the environment, and to attempts by unauthorised third parties to alter their performance by exploiting the system's vulnerabilities. Non-compliance can result in fines and a forced withdrawal from the market for infringing products and companies.

In this work, we develop theory and algorithms to train and certify robust Deep Neural Networks. Our theoretical results and proposed algorithms provide resilience in different scenarios, such as the presence of adversarial perturbations or the injection of random noise in the input features. In this way, our framework allows compliance with the requirements of the AI Act and is a step towards a safe rollout of High-Risk AI systems based on Deep Learning.

To summarize, the main contributions of this Ph.D. thesis are:
(I) the first algorithm for certifying the robustness of Deep Neural Networks using Polynomial Optimization, by upper bounding their Lipschitz constant;
(II) the first algorithm with guarantees for performing 1-path-norm regularization of Shallow Networks, and a proof of its relation to robustness against adversarial perturbations;
(III) an extension of 1-path-norm regularization methods to Deep Neural Networks;
(IV) the first generalization bounds and robustness analysis of Deep Polynomial Networks, and a novel regularization scheme to improve their robustness;
(V) the first theoretically correct descent method for Adversarial Training, the most common algorithm for training robust networks;
(VI) the first theoretically correct formulation of Adversarial Training as a bilevel optimization problem, which provides a solution to the robust overfitting phenomenon;
(VII) an ADMM algorithm with fast-convergence guarantees for the problem of denoising adversarial examples using Generative Adversarial Networks as a prior; and
(VIII) an explicit regularization scheme for Quadratic Neural Networks with a guaranteed improvement in robustness to random noise, compared to SVMs.
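As an illustration of contributions (II)-(III), the sketch below computes the 1-path-norm of a one-hidden-layer ReLU network, i.e. the sum of absolute weight products over every input-to-output path. The layer shapes, variable names and the suggestion of using it as a training penalty are assumptions for exposition only, not the thesis' implementation.

```python
import numpy as np

# Hedged sketch (not the thesis' code): for a shallow network
# f(x) = V @ relu(W @ x + b), the 1-path-norm is the sum of
# |V[k, j]| * |W[j, i]| over every path i -> j -> k.

def one_path_norm(W: np.ndarray, V: np.ndarray) -> float:
    """Sum of |V[k, j]| * |W[j, i]| over all paths through the hidden layer."""
    return float(np.sum(np.abs(V) @ np.abs(W)))

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 10))  # hidden_dim x input_dim (illustrative)
V = rng.standard_normal((3, 64))   # output_dim x hidden_dim (illustrative)

# Used as a penalty, e.g. loss + lam * one_path_norm(W, V), this quantity
# acts as a Lipschitz-related regularizer, which is the kind of connection
# to adversarial robustness that the thesis formalizes.
print(one_path_norm(W, V))
```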