Adversarial Analytics

Viet Anh Nguyen
2019
EPFL thesis
Abstract

Adversarial learning is an emerging technique that provides better security to machine learning systems by deliberately protecting them against specific vulnerabilities of the learning algorithms. Many adversarial learning problems can be cast equivalently as distributionally robust optimization problems that hedge against the least favorable probability distribution in a certain ambiguity set.

The main objectives of this thesis center on the development of novel analytics toolboxes using advanced probability and statistics machinery under the distributionally robust optimization/adversarial learning framework. Using a type-2 Wasserstein ambiguity set and its Gelbrich hull, which constitutes a conservative outer approximation, we propose new solutions with strong performance guarantees to several problems in statistical learning and risk management, while at the same time mitigating the curse of dimensionality inherent to these problems.

The first chapter proposes a distributionally robust inverse covariance estimator that minimizes the worst-case Stein's loss. The optimal estimator admits a closed-form representation and exhibits many desirable properties, none of which are imposed ad hoc but arise naturally from the distributionally robust optimization approach. The optimal estimator is closely related to a nonlinear eigenvalue shrinkage estimator. For this reason we refer to it as the Wasserstein shrinkage estimator. Furthermore, the Wasserstein shrinkage estimator can also be interpreted as a robust maximum likelihood estimator.
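To make the two ingredients of this chapter concrete, the sketch below shows (i) Stein's loss of a precision-matrix estimate, which is minimized at the true inverse covariance, and (ii) the generic structure of an eigenvalue shrinkage estimator: keep the sample eigenvectors and pass the sample eigenvalues through a scalar map. The particular nonlinear map derived in the thesis depends on the Wasserstein radius and is not reproduced here; `shrink` is a placeholder supplied by the caller.

```python
import numpy as np

def steins_loss(X, Sigma):
    """Stein's loss of a precision estimate X against covariance Sigma:
    tr(X Sigma) - log det(X Sigma) - p. It vanishes iff X = inv(Sigma)."""
    M = X @ Sigma
    return np.trace(M) - np.log(np.linalg.det(M)) - Sigma.shape[0]

def eigenvalue_shrinkage_precision(S, shrink):
    """Generic eigenvalue shrinkage estimator of the precision matrix:
    eigendecompose the sample covariance S, map each eigenvalue through
    `shrink`, invert, and reassemble with the sample eigenvectors."""
    lam, V = np.linalg.eigh(S)
    return V @ np.diag(1.0 / shrink(lam)) @ V.T
```

With the identity map `shrink = lambda lam: lam`, the estimator reduces to the plain inverse sample covariance; the Wasserstein shrinkage estimator instead applies a nonlinear correction to each eigenvalue before inverting.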

The second chapter proposes a distributionally robust minimum mean square error estimator. Under a mild assumption on the nominal distribution of the uncertain data, we show that the optimal estimator is an affine function of the observations, which can be constructed efficiently using a first-order optimization method to solve the underlying semidefinite program.
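For reference, the classical (non-robust) affine MMSE estimator that this chapter robustifies has the well-known form x_hat = mu_x + S_xy S_yy^{-1} (y - mu_y); the distributionally robust version keeps this affine structure but with a gain obtained from the semidefinite program, which is not reproduced here. A minimal sketch of the nominal affine estimator:

```python
import numpy as np

def affine_mmse(mu_x, mu_y, Sxy, Syy, y):
    """Classical affine MMSE estimate of x given the observation y,
    using the cross-covariance Sxy and observation covariance Syy:
    x_hat = mu_x + Sxy @ inv(Syy) @ (y - mu_y)."""
    K = Sxy @ np.linalg.inv(Syy)  # optimal affine gain
    return mu_x + K @ (y - mu_y)
```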

The third chapter studies distributionally robust risk measures under the Gelbrich hull ambiguity set, which is an outer approximation of the type-2 Wasserstein ambiguity set. We prove that the robustified Gelbrich risk of many popular law-invariant risk measures admits a closed-form expression. The result is extended to provide tractable reformulations for the worst-case expected loss as well as the value-at-risk of nonlinear portfolios.
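The Gelbrich hull is built on the Gelbrich distance between two mean-covariance pairs, G((mu1, S1), (mu2, S2))^2 = ||mu1 - mu2||^2 + tr(S1 + S2 - 2 (S1^{1/2} S2 S1^{1/2})^{1/2}), which coincides with the type-2 Wasserstein distance when both distributions are Gaussian. A minimal numpy sketch (the PSD square root is computed by eigendecomposition):

```python
import numpy as np

def psd_sqrt(A):
    """Symmetric square root of a symmetric PSD matrix via eigendecomposition."""
    lam, V = np.linalg.eigh(A)
    return V @ np.diag(np.sqrt(np.clip(lam, 0.0, None))) @ V.T

def gelbrich_distance(mu1, S1, mu2, S2):
    """Gelbrich distance between the mean-covariance pairs (mu1, S1)
    and (mu2, S2)."""
    R = psd_sqrt(S1)
    cross = psd_sqrt(R @ S2 @ R)  # (S1^{1/2} S2 S1^{1/2})^{1/2}
    d2 = np.sum((mu1 - mu2) ** 2) + np.trace(S1 + S2 - 2.0 * cross)
    return np.sqrt(max(d2, 0.0))  # clip tiny negatives from round-off
```

Note that the distance depends on a distribution only through its first two moments, which is what makes the Gelbrich hull a tractable outer approximation of the Wasserstein ball.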
