Hardware accelerators for Deep Neural Networks (DNNs) that use reduced precision parameters are more energy efficient than their full precision counterparts. While many studies have focused on reduced precision training methods for supervised networks trained on large datasets, less work has been reported on incremental learning algorithms that adapt a network to new classes, and on the effect that reduced precision has on these algorithms. This paper presents an empirical study of how reduced precision training methods impact the iCaRL incremental learning algorithm. The incremental network accuracies on the CIFAR-100 image dataset show that weights can be quantized to 1 bit with only a 2.39% drop in accuracy, but when activations are quantized to 1 bit, the accuracy drops much more (12.75%). Quantizing gradients from 32 to 8 bits affects the accuracy of the trained network by less than 1%. These results are encouraging for hardware accelerators that support incremental learning algorithms.
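To illustrate the kind of quantization the study refers to, the following is a minimal PyTorch sketch of 1-bit (sign) weight or activation quantization with a straight-through estimator, plus a generic uniform fake-quantizer that could be applied to gradients. It is an assumed, generic formulation for illustration only, not the exact scheme evaluated in the paper.

```python
import torch


class BinarizeSTE(torch.autograd.Function):
    """1-bit (sign) quantization with a straight-through estimator.

    Generic sketch; the paper's precise quantization scheme may differ.
    """

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # Map every element to +1 or -1.
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Straight-through estimator: pass gradients only where |x| <= 1.
        return grad_output * (x.abs() <= 1).float()


def quantize_uniform(x: torch.Tensor, bits: int = 8) -> torch.Tensor:
    """Symmetric uniform fake-quantization to `bits` bits (e.g. for gradients)."""
    scale = x.abs().max().clamp(min=1e-8)
    levels = 2 ** (bits - 1) - 1
    return torch.round(x / scale * levels) / levels * scale


if __name__ == "__main__":
    w = torch.randn(4, 4, requires_grad=True)
    BinarizeSTE.apply(w).sum().backward()
    print(w.grad)                                  # straight-through gradients
    print(quantize_uniform(torch.randn(3), bits=8))  # 8-bit quantized tensor
```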