Many fields nowadays make use of machine learning (ML) enhanced applications for cost optimization, scheduling, or forecasting, including the energy sector. However, these very ML algorithms consume a significant amount of energy, sometimes defeating the purpose of their deployment. To this day, the energy-efficient execution of these algorithms has not been addressed adequately. In this paper, we demonstrate the energy advantage of executing ML algorithms on mobile devices (ARM) over a standard server machine (RISC). To do so, we first propose a novel methodology to quantify the amount of energy consumed by an ML algorithm. Then, we compare the energy consumption of existing algorithms running on mobile devices and server machines. To further motivate running ML algorithms on mobile devices, we also propose a new peer-to-peer personalized ML algorithm (P3) that exhibits better convergence properties than related works: under mild assumptions, it provably converges to a ball centered at a critical point of a non-convex cost function. Most importantly, we show that running the P3 algorithm on mobile devices is extremely energy-efficient, consuming 2700x, 200x, and 20x less energy than centralized learning algorithms for 10, 100, and 300 peers, respectively. Finally, unlike centralized learning algorithms, the proposed P2P algorithm can generate personalized models, has no single point of failure, and avoids data privacy issues. Thus, we provide evidence for the advantages of our proposed P3 algorithm over state-of-the-art centralized ML alternatives.