A critical component of a successful language generation pipeline is the decoding algorithm. However, the general principles that should guide the choice of a decoding algorithm remain unclear. Previous works only compare decoding algorithms in narrow scenarios, and their findings do not generalize across tasks. We argue that the misalignment between the model's likelihood and the task-specific notion of utility is the key factor to understanding the effectiveness of decoding algorithms. To structure the discussion, we introduce a taxonomy of misalignment mitigation strategies (MMSs), providing a unifying view of decoding as a tool for alignment. The MMS taxonomy groups decoding algorithms based on their implicit assumptions about likelihood–utility misalignment, yielding general statements about their applicability across tasks. Specifically, by analyzing the correlation between the likelihood and the utility of predictions across a diverse set of tasks, we provide empirical evidence supporting the proposed taxonomy and a set of principles to structure reasoning when choosing a decoding algorithm. Crucially, our analysis is the first to relate likelihood-based decoding algorithms with algorithms that rely on external information, such as value-guided methods and prompting, and covers the most diverse set of tasks to date. Code, data, and models are available at https://github.com/epfl-dlab/understanding-decoding.
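To make the central quantity concrete, the sketch below illustrates one way a likelihood–utility correlation analysis could be set up. It is a minimal illustration only: the variable names, the toy values, and the choice of Spearman rank correlation are assumptions for exposition, not the exact procedure used in the paper or its released code.

```python
# Hypothetical sketch: measuring likelihood-utility (mis)alignment for one task.
# All names and numbers are illustrative; see the linked repository for the
# actual experimental pipeline.
import numpy as np
from scipy.stats import spearmanr

# Suppose each candidate prediction y_i for an input x_i comes with:
#   - the model's sequence log-likelihood log p(y_i | x_i)
#   - a task-specific utility score u(y_i), e.g. BLEU, ROUGE, or accuracy
log_likelihoods = np.array([-12.3, -8.7, -15.1, -9.9, -11.0])  # toy values
utilities = np.array([0.41, 0.63, 0.22, 0.58, 0.49])           # toy values

# Rank correlation between likelihood and utility: a strong positive correlation
# suggests likelihood-maximising decoding (e.g. beam search) is well aligned
# with the task, while a weak or negative correlation signals the kind of
# likelihood-utility misalignment the taxonomy is organised around.
rho, p_value = spearmanr(log_likelihoods, utilities)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
```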