Publication

Byzantine tolerant gradient descent for distributed machine learning with adversaries

Abstract

The present application concerns a computer-implemented method for training a machine learning model in a distributed fashion using Stochastic Gradient Descent (SGD). The method is performed by a first computer in a distributed computing environment and comprises performing a learning round, which comprises: broadcasting a parameter vector to a plurality of worker computers in the distributed computing environment; receiving an estimate vector (a gradient estimate) from all or a subset of the worker computers, wherein each received estimate vector is either an estimate of the gradient of a cost function or an erroneous vector; and determining an updated parameter vector for use in the next learning round based only on a subset of the received estimate vectors. The method aggregates the gradients while guaranteeing resilience when up to half of the workers are compromised (malfunctioning, erroneous, or modified by attackers).
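The abstract does not specify how the server selects the subset of estimate vectors it keeps. Purely as an illustration, the sketch below (NumPy; the function names krum_like_select and learning_round are invented for this example) shows one well-known selection-based aggregation of this kind, in which the server keeps the single estimate whose nearest neighbours lie closest to it and discards the rest; the rule actually claimed by the application may differ.

```python
import numpy as np

def krum_like_select(estimates, num_byzantine):
    # Score each estimate by the summed squared distance to its
    # n - num_byzantine - 2 nearest neighbours, then keep the estimate
    # with the lowest score. Requires len(estimates) >= num_byzantine + 3.
    vectors = np.asarray(estimates)
    n = len(vectors)
    k = n - num_byzantine - 2
    scores = []
    for v in vectors:
        dists = np.sort(np.sum((vectors - v) ** 2, axis=1))
        scores.append(dists[1:k + 1].sum())  # dists[0] is the zero distance to itself
    return vectors[int(np.argmin(scores))]

def learning_round(params, worker_gradients, num_byzantine, lr=0.1):
    # One learning round: pick a single robust estimate, then take a plain SGD step.
    chosen = krum_like_select(worker_gradients, num_byzantine)
    return params - lr * chosen

# Toy usage: seven honest estimates near the true gradient, two adversarial vectors.
params = np.zeros(4)
honest = [np.random.randn(4) * 0.1 + 1.0 for _ in range(7)]
byzantine = [np.full(4, 1e6), np.full(4, -1e6)]
params = learning_round(params, honest + byzantine, num_byzantine=2)
```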
