In this paper, we revisit an efficient algorithm for noisy group testing in which each item is decoded separately (Malyutov and Mateev, 1980), and develop novel performance guarantees via an information-theoretic framework for general noise models. For the noiseless and symmetric noise models, we find that the asymptotic number of tests required for vanishing error probability is within a factor log 2 ≈ 0.7 of the information-theoretic optimum at low sparsity levels, and that when a small fraction of incorrectly-decoded items is allowed, this guarantee extends to all sublinear sparsity levels. In many scaling regimes, these are the best known theoretical guarantees for any noisy group testing algorithm.
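The separate-decoding idea can be illustrated in its simplest, noiseless form, where deciding each item independently reduces to a per-item rule: declare an item non-defective exactly when it appears in at least one negative test. The sketch below is illustrative only; the problem sizes, the i.i.d. Bernoulli test design with participation probability ln 2 / k, and the NumPy formulation are assumptions for the example, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumed, not from the paper):
n, k, T = 100, 5, 60  # items, defectives, tests

# Random defective set of size k.
defective = np.zeros(n, dtype=bool)
defective[rng.choice(n, size=k, replace=False)] = True

# Bernoulli test design: each item joins each test independently
# with probability ~ ln(2)/k, a common choice in the literature.
X = rng.random((T, n)) < np.log(2) / k

# Noiseless OR-channel outcomes: a test is positive iff it
# contains at least one defective item.
y = (X.astype(int) @ defective.astype(int)) > 0

# Separate decoding, noiseless special case: item j is declared
# defective iff it never participates in a negative test.
estimate = ~np.any(X[~y], axis=0)
```

In the noiseless case this rule never produces false negatives (a defective item can only appear in positive tests), so any decoding error consists of false positives, whose probability shrinks as the number of tests grows.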