Lecture

Numerical Encoding for DNN Accelerators

Description

This lecture covers the exponential growth of Deep Neural Networks (DNNs) and the challenges this growth poses for computational cost and carbon emissions. It introduces numerical encoding schemes such as Block Floating Point (BFP) and Hybrid Block Floating Point (HBFP) that balance DNN accuracy against hardware efficiency. The presentation discusses the impact of HBFP on DNN accuracy, its hardware benefits, and cross-optimization opportunities. It also examines the problem setting of information advantage in a multi-player scenario and presents results on the structure of optimal decision rules, the empirical distribution, scoring sets, and the optimal tradeoff. The lecture concludes with practical advice on writing research papers effectively.
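To make the encoding concrete, below is a minimal illustrative sketch of block floating point quantization in Python/NumPy. It is not taken from the lecture: the mantissa width, block size, rounding scheme, and function name are assumptions chosen only to show how a BFP block shares a single exponent across its values.

```python
import numpy as np

def bfp_quantize(values, mantissa_bits=8, block_size=16):
    """Illustrative block floating point quantization (assumed parameters).

    Each block of `block_size` values shares one exponent, derived from the
    block's largest magnitude; mantissas are rounded to `mantissa_bits` bits.
    """
    values = np.asarray(values, dtype=np.float64)
    out = np.empty_like(values)
    for start in range(0, values.size, block_size):
        block = values[start:start + block_size]
        max_mag = np.max(np.abs(block))
        if max_mag == 0:
            out[start:start + block_size] = 0.0
            continue
        # Shared exponent: smallest power of two covering the block's range.
        shared_exp = int(np.ceil(np.log2(max_mag)))
        scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
        # Quantize mantissas to signed integers, then rescale back.
        mantissas = np.clip(np.round(block / scale),
                            -(2 ** (mantissa_bits - 1)),
                            2 ** (mantissa_bits - 1) - 1)
        out[start:start + block_size] = mantissas * scale
    return out

# Example: quantize a small weight vector and inspect the rounding error.
weights = np.random.randn(64)
quantized = bfp_quantize(weights)
print("max abs error:", np.max(np.abs(weights - quantized)))
```

The hybrid scheme (HBFP) typically combines block floating point arithmetic for the dot products with standard floating point for the remaining operations; the sketch above covers only the basic BFP quantization step.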
