Super short project showing a potential attack against federated learning.

federated_learning_attack

A privacy attack from the point of view of a malicious server during the standard federated averaging algorithm with secure aggregation.

Basically, the server sends a poisoned model to every participant except one targeted participant. The poisoned models exploit dead ReLUs to stay in a frozen state: every parameter gradient is zero, so the trained models the clients send back are identical to the ones they received. Since those poisoned clients contribute nothing to the securely aggregated sum, the sum reduces to the targeted participant's contribution, and the malicious server can extract that participant's update (and hence its gradient).
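
Below is a minimal sketch of the idea in PyTorch. It is hypothetical code, not taken from this repository, and it assumes a toy MLP whose output layer has no bias, so that dead hidden ReLUs zero out every parameter gradient:

```python
import torch
import torch.nn as nn


def make_model():
    # Toy MLP. The output layer has no bias (simplifying assumption): with the
    # hidden ReLUs dead, every parameter gradient is then exactly zero.
    return nn.Sequential(
        nn.Linear(10, 32),
        nn.ReLU(),
        nn.Linear(32, 1, bias=False),
    )


def poison(model):
    # Kill the hidden ReLUs: zero weights plus a large negative bias make every
    # pre-activation negative, so the activations (and all gradients) vanish.
    with torch.no_grad():
        model[0].weight.zero_()
        model[0].bias.fill_(-1e4)
    return model


def local_update(model, x, y, lr=0.1):
    # One local SGD step; returns the per-parameter update sent to the server.
    before = [p.detach().clone() for p in model.parameters()]
    nn.functional.mse_loss(model(x), y).backward()
    with torch.no_grad():
        for p in model.parameters():
            p -= lr * p.grad
    return [p.detach() - b for p, b in zip(model.parameters(), before)]


torch.manual_seed(0)
clients = [(torch.randn(8, 10), torch.randn(8, 1)) for _ in range(3)]

# Client 0 (the target) gets an honest model; the others get frozen ones.
updates = []
for i, (x, y) in enumerate(clients):
    model = make_model() if i == 0 else poison(make_model())
    updates.append(local_update(model, x, y))

# Secure aggregation only reveals the sum of the clients' updates...
aggregated = [sum(us) for us in zip(*updates)]

# ...but the poisoned clients contributed nothing, so the sum is exactly the
# target's individual update.
for agg, tgt in zip(aggregated, updates[0]):
    assert torch.allclose(agg, tgt)
print("Recovered the targeted participant's update.")
```

The no-bias output layer is only a simplification for the sketch; with an output bias, its gradient would not be zeroed by the dead ReLUs alone and a real attack has to account for it.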

It might seem obvious, but it shows that even under a secure aggregation protocol, a form of trust still has to be placed in the server.
