We investigate the statistical and algorithmic properties of random neural-network generative priors in a simple inference problem: spiked-matrix estimation. We establish a rigorous expression for the performance of the Bayes-optimal estimator in the high-dimensional regime, and identify the statistical threshold for weak recovery of the spike. Next, we derive a message-passing algorithm that takes the latent structure of the spike into account, and show that its performance is asymptotically optimal for natural choices of the generative network architecture. The absence of an algorithmic gap in this case is in stark contrast to known results for sparse spikes, another popular prior for modelling low-dimensional signals, for which no algorithm is known to achieve the optimal statistical threshold. Finally, we show that linearising our message-passing algorithm yields a simple spectral method that also achieves the optimal threshold for reconstruction. We conclude with an experiment on a real data set showing that our bespoke spectral method outperforms vanilla PCA.
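To fix notation, here is a minimal sketch of a spiked-matrix model with a single-layer generative prior of the kind the abstract refers to; the precise scaling conventions, the noise model, and the symbols (signal-to-noise ratio λ, dimensions n and k, weights W, activation φ) are illustrative assumptions rather than the paper's exact definitions:

\[
  Y \;=\; \sqrt{\tfrac{\lambda}{n}}\, x^{\star} (x^{\star})^{\top} \;+\; \xi,
  \qquad \xi_{ij} = \xi_{ji} \sim \mathcal{N}(0,1),
\]
\[
  x^{\star} \;=\; \varphi\!\left(W z^{\star}\right) \in \mathbb{R}^{n},
  \qquad z^{\star} \sim \mathcal{N}(0, I_{k}),
\]

where W ∈ ℝ^{n×k} is a fixed random weight matrix and φ an entrywise activation. The task is to recover the spike x* (equivalently its low-dimensional latent representation z*) from the noisy observation Y in the high-dimensional limit where n and k grow proportionally. Vanilla PCA would estimate x* from the leading eigenvector of Y alone, whereas the message-passing and spectral methods described above additionally exploit the generative structure x* = φ(W z*).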