Deep neural network (DNN) inference tasks are computationally expensive. Digital DNN accelerators offer better density and energy efficiency than general-purpose processors, but are still not efficient enough for deployment in resource-constrained settings. Analog computing is a promising alternative, but previously proposed analog circuits suffer greatly from fabrication variations. We observe that relaxing the requirement that synapses be linear enables circuits with higher density and greater resilience to transistor mismatch. We also note that the training process offers an opportunity to compensate for the non-ideality and unreliability of analog circuits. In this work, we introduce a novel synapse circuit design that is dense and insensitive to transistor mismatch, together with a novel training algorithm for neural networks built from non-ideal and unreliable analog circuits. Compared to state-of-the-art digital and analog accelerators, our circuit achieves 29x and 582x better computational density, respectively.
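To illustrate the second idea, training that absorbs circuit variation, the sketch below injects simulated transistor mismatch into the forward pass so the learned weights become robust to it. This is a minimal, hypothetical PyTorch illustration: the names (`NoisyLinear`, `mismatch_std`) and the multiplicative-noise model are assumptions for exposition, not the paper's actual synapse model or training algorithm.

```python
# Hypothetical sketch of variation-aware training (not the paper's method).
# Idea: sample fresh simulated device mismatch on every training step so the
# network learns weights that tolerate analog non-idealities.
import torch
import torch.nn as nn

class NoisyLinear(nn.Linear):
    """Linear layer whose weights are perturbed by multiplicative noise on
    each forward pass during training, emulating transistor mismatch."""
    def __init__(self, in_features, out_features, mismatch_std=0.1):
        super().__init__(in_features, out_features)
        self.mismatch_std = mismatch_std  # assumed relative mismatch level

    def forward(self, x):
        if self.training:
            # Effective weight per step: w_eff = w * (1 + eps), eps ~ N(0, std^2)
            eps = torch.randn_like(self.weight) * self.mismatch_std
            return nn.functional.linear(x, self.weight * (1 + eps), self.bias)
        return super().forward(x)

# Drop-in replacement for nn.Linear in an ordinary training loop.
model = nn.Sequential(NoisyLinear(784, 256), nn.ReLU(), NoisyLinear(256, 10))
```

A fabricated chip corresponds to one fixed mismatch sample per device; training under a fresh sample each step is what confers robustness to whichever sample fabrication happens to draw.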