FLARE: Defending Federated Learning against Model Poisoning Attacks via Latent Space Representations
Talk, AsiaCCS 2022 (recorded), presented at the conference in Nagasaki, Japan
This talk presents FLARE, a robust aggregation algorithm that protects federated learning (FL) against model poisoning attacks (MPAs). It demonstrates that penultimate layer representation (PLR) vectors have strong potential for distinguishing malicious or poisoned models from benign ones. FLARE minimizes the impact of malicious models on the final aggregation by assigning low trust scores to models whose PLRs diverge from the majority.
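The trust-weighted aggregation idea can be illustrated with a minimal sketch. This is not the paper's exact algorithm (FLARE derives trust scores from the distribution of PLRs, e.g. via neighborhood counts); here we simply use each client's mean PLR distance to the others as a divergence measure and down-weight diverging clients with a softmax. The function name and the `temperature` parameter are illustrative assumptions.

```python
import numpy as np

def flare_style_aggregate(updates, plrs, temperature=1.0):
    """Hedged sketch of trust-weighted aggregation (not the exact FLARE
    algorithm): clients whose PLR vectors diverge from the others
    receive low trust scores and contribute less to the aggregate.

    updates: list of flattened model-update vectors (np.ndarray)
    plrs:    list of penultimate-layer representation vectors
    """
    plrs = np.stack(plrs)                       # (n_clients, d)
    # Pairwise Euclidean distances between PLR vectors.
    diffs = plrs[:, None, :] - plrs[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # A client's divergence = mean distance to all other clients.
    n = len(plrs)
    divergence = dists.sum(axis=1) / (n - 1)
    # Softmax over negative divergence: diverging PLRs get low trust.
    scores = np.exp(-divergence / temperature)
    trust = scores / scores.sum()
    # Trust-weighted average of the client updates.
    return np.average(np.stack(updates), axis=0, weights=trust)
```

For example, with four benign clients sharing similar PLRs and one poisoned client with a far-away PLR, the poisoned client's trust score collapses toward zero and the aggregate stays close to the benign mean.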