In the celebrated coded caching problem studied by Maddah-Ali and Niesen, the peak-traffic network load is reduced by first caching some information about the contents in the individual memories of end users during off-peak hours, and then, upon user requests, broadcasting additional information that, combined with the cached information, allows each user to recover its requested content. Information-theoretic studies of coded caching thus concern the optimal tradeoff between the communication rates of the two phases, cache placement and content delivery, as well as the optimal construction of codes for both phases. To allow better utilization of network resources, this paper introduces a new caching model in which user requests can arrive at any point during the cache placement phase, and proposes a successive refinement approach to this dynamic caching problem. For uniformly random file requests, the optimal tradeoff among average-case delivery rates is characterized whenever the cache rate is above a well-defined threshold. For arbitrary file requests, a successive caching algorithm is developed that simultaneously reduces the worst-case delivery rates at every request time, achieving rates uniformly within a constant multiplicative factor of their respective optima.
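As background for the rate tradeoff mentioned above, and not as a description of this paper's dynamic scheme, the following minimal sketch computes the classical Maddah-Ali and Niesen worst-case delivery rate. It assumes N files, K users, and a per-user cache of M files with M a multiple of N/K; the function and variable names are illustrative only.

def man_delivery_rate(num_files: int, num_users: int, cache_size: float) -> float:
    """Worst-case broadcast rate K(1 - M/N) / (1 + K*M/N), in units of one file."""
    # t is the number of users at which each subfile is cached during placement.
    t = num_users * cache_size / num_files
    return num_users * (1 - cache_size / num_files) / (1 + t)

# Example: N = 4 files, K = 4 users, M = 2 cached files per user
# -> coded delivery rate 4 * (1 - 0.5) / (1 + 2) = 2/3 of a file,
#    compared with 2 files for uncoded caching.
print(man_delivery_rate(4, 4, 2))  # 0.666...

The gain over uncoded caching in this example comes from the multiplicative factor 1/(1 + KM/N), which is the quantity the cache placement and content delivery codes are designed to achieve; the paper's successive refinement approach addresses how to approach such rates when requests can arrive before placement is complete.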