Deep neural network (DNN) inference tasks are computationally expensive. Digital DNN accelerators offer better density and energy efficiency than general-purpose processors, but still not enough to be deployable in resource-constrained settings. Analog computing is a promising alternative, but previously proposed circuits suffer greatly from fabrication variations. We observe that relaxing the requirement of linear synapses enables circuits with higher density and greater resilience to transistor mismatch. We also note that the training process offers an opportunity to address the non-ideality and unreliability of analog circuits. In this work, we introduce a novel synapse circuit design that is dense and insensitive to transistor mismatch, and a novel training algorithm that helps train neural networks with non-ideal and unreliable analog circuits. Compared to state-of-the-art digital and analog accelerators, our circuit achieves 29x and 582x better computational density, respectively.
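The abstract does not spell out the training algorithm. One common way to make a network tolerant of device variation is to inject a fresh mismatch realization into the synapse model on every forward pass during training. The sketch below illustrates that idea in PyTorch as a minimal example; the NoisyLinear layer, the multiplicative noise model, and all hyperparameters are assumptions for illustration, not the authors' method.

```python
# A minimal sketch of mismatch-aware training. Assumption: the training loop
# perturbs the effective weights with random per-device variation so the
# learned network tolerates analog non-idealities.
import torch
import torch.nn as nn

class NoisyLinear(nn.Linear):
    """Linear layer whose effective weights are perturbed by random
    multiplicative mismatch during the forward pass (training only)."""
    def __init__(self, in_features, out_features, mismatch_std=0.1):
        super().__init__(in_features, out_features)
        self.mismatch_std = mismatch_std

    def forward(self, x):
        if self.training:
            # Sample a new mismatch realization each step so the network
            # cannot rely on any single device being exact.
            noise = 1.0 + self.mismatch_std * torch.randn_like(self.weight)
            return nn.functional.linear(x, self.weight * noise, self.bias)
        return super().forward(x)

model = nn.Sequential(NoisyLinear(784, 256), nn.ReLU(), NoisyLinear(256, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on random data.
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```

At inference time the layer falls back to its nominal weights, so the same model can be evaluated with or without a simulated mismatch realization.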