In this paper, we revisit an efficient algorithm for noisy group testing in which each item is decoded separately (Malyutov and Mateev, 1980), and develop novel performance guarantees via an information-theoretic framework for general noise models. For the noiseless and symmetric noise models, we find that the asymptotic number of tests required for vanishing error probability is within a factor log 2 ≈ 0.7 of the information-theoretic optimum at low sparsity levels, and that when a small fraction of incorrectly decoded items is allowed, this guarantee extends to all sublinear sparsity levels. In many scaling regimes, these are the best known theoretical guarantees for any noisy group testing algorithm.
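As a concrete illustration of separate decoding, here is a minimal simulation sketch in Python. It assumes a Bernoulli test design of density ν/k, a symmetric noise model with flip probability ρ, and that k is known to the decoder; all parameter values, and the final top-k selection rule (in place of the fixed decision threshold used in the analysis), are illustrative choices for this demo rather than the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (chosen for this demo, not taken from the paper).
p, k, n = 2000, 10, 600        # items, defectives, number of tests
rho = 0.05                     # symmetric noise: each outcome flips w.p. rho
nu = np.log(2)                 # Bernoulli design density parameter

# Bernoulli test design: each item joins each test independently w.p. nu/k.
X = rng.random((n, p)) < nu / k
defective = rng.choice(p, size=k, replace=False)
truth = np.zeros(p, dtype=bool)
truth[defective] = True

# Noiseless outcome is the OR of the defectives' columns; then flip w.p. rho.
y = X[:, defective].any(axis=1) ^ (rng.random(n) < rho)

# Separate decoding: for each item j, sum per-test log-likelihood ratios of
# "y_i depends on x_ij through the channel" vs. "y_i independent of x_ij".
theta = 1 - (1 - nu / k) ** (k - 1)         # P(another defective in a test)
q1 = 1 - rho                                # P(y=1 | x_j=1, j defective)
q0 = (1 - rho) * theta + rho * (1 - theta)  # P(y=1 | x_j=0, j defective)
pm = (nu / k) * q1 + (1 - nu / k) * q0      # marginal P(y=1)

q = np.where(X, q1, q0)                     # P(y_i=1 | x_ij), shape (n, p)
llr = np.where(y[:, None], np.log(q / pm), np.log((1 - q) / (1 - pm)))
stats = llr.sum(axis=0)                     # one decision statistic per item

# The analysis thresholds each statistic at a fixed gamma; for this demo we
# simply keep the k largest statistics (assumes k is known).
estimate = np.zeros(p, dtype=bool)
estimate[np.argsort(stats)[-k:]] = True
print("false negatives:", int((truth & ~estimate).sum()),
      "false positives:", int((~truth & estimate).sum()))
```

The reason a per-item rule like this can work: for a defective item the statistic concentrates around n times the mutual information between its test-participation indicator and the outcome, while for a non-defective item it stays near or below zero, so a single threshold (or a top-k pick, as above) separates the two populations once n is large enough.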