Near-term quantum devices can be used to build quantum machine learning models, such as quantum kernel methods and quantum neural networks (QNNs), to perform classification tasks. There have been many proposals on how to use variational quantum circuits as quantum perceptrons or as QNNs. The aim of this work is to introduce a teacher-student scheme that can systematically compare arbitrary QNN architectures and evaluate their relative expressive power. Specifically, the teacher model generates datasets mapping random inputs to outputs, which then have to be learned by the student models. This way, we avoid training on arbitrary datasets and can compare the learning capacity of different models directly via the loss, the prediction map, the accuracy, and the relative entropy between the prediction maps. Here, we focus in particular on a quantum perceptron model inspired by the recent work of Tacchino et al. (2019) and compare it to the data re-uploading scheme originally introduced by Perez-Salinas et al. (2020). We discuss alterations of the perceptron model and the formation of deep QNNs to better understand the role of hidden units and non-linearities in these architectures.
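The teacher-student idea can be sketched classically in a few lines: a fixed random teacher labels random inputs, and competing students are trained on the same dataset and compared by accuracy. This is a minimal illustrative sketch only; the random linear teacher and the two toy logistic-regression students are assumptions for demonstration, not the quantum perceptron or data re-uploading architectures studied in the paper.

```python
# Minimal classical sketch of a teacher-student comparison.
# The teacher, students, and feature maps here are illustrative
# assumptions, not the QNN models from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Teacher: a fixed random linear decision rule labels random inputs.
w_teacher = rng.normal(size=2)
X = rng.uniform(-1, 1, size=(500, 2))
y = (X @ w_teacher > 0).astype(float)

def train_student(feature_map, lr=0.5, epochs=200):
    """Train a toy 'student' (logistic regression on a feature map)
    on the teacher-generated dataset; return its accuracy."""
    Phi = feature_map(X)
    w = np.zeros(Phi.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Phi @ w))       # predicted probabilities
        w -= lr * Phi.T @ (p - y) / len(y)       # gradient step on log-loss
    return ((Phi @ w > 0).astype(float) == y).mean()

# Two students with different expressive power: linear features
# versus linear plus quadratic features.
acc_linear = train_student(lambda X: X)
acc_quad = train_student(lambda X: np.hstack([X, X**2, X[:, :1] * X[:, 1:]]))
```

Because the teacher itself is linear here, the linear student already matches it well; in the paper's setting the same comparison is run with quantum teachers and students, using loss, accuracy, and relative entropy between prediction maps as the metrics.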
Lenka Zdeborová, Elisabetta Cornacchia, Bruno Loureiro, Francesca Mignacco