Many recent works have shown that if a given signal admits a sufficiently sparse representation in a given dictionary, then this representation is recovered by several standard optimization algorithms, in particular the convex ℓ1-minimization approach. Here we investigate the related problem of inferring the dictionary from training data, with an approach where ℓ1-minimization is used as a criterion to select a dictionary. We restrict our analysis to basis learning and identify necessary / sufficient / necessary and sufficient conditions on ideal (not necessarily very sparse) coefficients of the training data in an ideal basis to guarantee that the ideal basis is a strict local optimum of the ℓ1-minimization criterion among (not necessarily orthogonal) bases of normalized vectors. We illustrate these conditions on deterministic as well as toy random models in dimension two and highlight the main challenges left open by these preliminary theoretical results.
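The ℓ1-minimization (basis pursuit) recovery step mentioned above can be sketched as a small linear program: minimize ||x||_1 subject to Ax = b. The following is a minimal illustration, not the paper's method; the helper name `basis_pursuit` and the toy dictionary are assumptions for the example.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 subject to A x = b via a standard LP reformulation."""
    m, n = A.shape
    # Variables z = [x, t]; minimize sum(t), where t bounds |x| entrywise.
    c = np.concatenate([np.zeros(n), np.ones(n)])
    A_eq = np.hstack([A, np.zeros((m, n))])
    I = np.eye(n)
    # x - t <= 0 and -x - t <= 0 together encode |x_i| <= t_i.
    A_ub = np.vstack([np.hstack([I, -I]), np.hstack([-I, -I])])
    b_ub = np.zeros(2 * n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

# Toy example: a sparse vector in an overcomplete random dictionary.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 10))
x_true = np.zeros(10)
x_true[3] = 2.0
b = A @ x_true
x_hat = basis_pursuit(A, b)
```

For a sufficiently sparse `x_true`, the LP solution typically coincides with the sparse representation, which is the recovery phenomenon the abstract refers to.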
Michaël Unser, Shayan Aziznejad