Caching is an approach to smoothing the variability of network traffic over time. It has recently been shown that the local memories at the users can be exploited to reduce peak traffic far more efficiently than previously believed. In this work we improve upon existing results and introduce a novel caching strategy that takes advantage of simultaneous coded placement and coded delivery in order to decrease the worst-case achievable rate for any number of files and users. We show that our scheme outperforms the state of the art for every cache size.
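For context (this baseline is not stated in the abstract itself, and the symbols $K$, $N$, $M$, and $t$ are introduced here purely for illustration), the worst-case delivery rate achieved by the classical coded caching scheme of Maddah-Ali and Niesen with uncoded placement, for $K$ users, $N \ge K$ files, and per-user cache size $M$ such that $t = KM/N$ is an integer, is
\[
R_{\mathrm{MN}}(M) \;=\; \frac{K - t}{1 + t} \;=\; K\left(1 - \frac{M}{N}\right)\cdot\frac{1}{1 + KM/N}.
\]
Schemes that additionally employ coded placement, such as the one announced here, aim to achieve rates below this curve, with the largest gains typically appearing in the small-cache regime.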