FLARE: Defending Federated Learning against Model Poisoning Attacks via Latent Space Representations

Date:

This talk presents FLARE, a robust aggregation algorithm that protects federated learning (FL) against model poisoning attacks (MPAs). It demonstrates that penultimate layer representation (PLR) vectors have high potential for differentiating malicious/poisoned models from benign ones. FLARE effectively minimizes the impact of malicious/poisoned models on the final aggregation by assigning low trust scores to models whose PLRs diverge from the majority.
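
To make the idea concrete, below is a minimal sketch of PLR-divergence-based, trust-weighted aggregation. It is an illustrative approximation rather than the authors' exact FLARE algorithm: the Euclidean distance between PLR vectors, the softmax-over-negative-divergence trust scoring, the temperature parameter, and all function names are assumptions made for this example.

    # Minimal sketch (assumed, not the authors' exact method): clients whose
    # PLR vectors diverge from the majority receive near-zero trust, so their
    # model updates contribute little to the aggregated global update.
    import numpy as np

    def trust_scores(plrs: np.ndarray, temperature: float = 1.0) -> np.ndarray:
        """Assign a trust score to each client from its PLR vector.

        plrs: (n_clients, d) matrix, one penultimate-layer representation per client.
        Returns an (n_clients,) vector of non-negative scores summing to 1.
        """
        # Pairwise Euclidean distances between clients' PLR vectors.
        diffs = plrs[:, None, :] - plrs[None, :, :]
        dists = np.linalg.norm(diffs, axis=-1)            # (n, n)
        # A client's divergence = mean distance to all other clients.
        divergence = dists.sum(axis=1) / (len(plrs) - 1)
        # Softmax over negative divergence: diverging PLRs -> low trust.
        logits = -divergence / temperature
        logits -= logits.max()                            # numerical stability
        weights = np.exp(logits)
        return weights / weights.sum()

    def aggregate(updates: np.ndarray, scores: np.ndarray) -> np.ndarray:
        """Trust-weighted average of flattened client model updates."""
        return (scores[:, None] * updates).sum(axis=0)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        benign = rng.normal(0.0, 0.1, size=(8, 16))       # 8 benign clients' PLRs
        poisoned = rng.normal(3.0, 0.1, size=(2, 16))     # 2 attackers with diverging PLRs
        plrs = np.vstack([benign, poisoned])
        updates = rng.normal(size=(10, 100))              # toy model updates
        scores = trust_scores(plrs)
        print("trust scores:", np.round(scores, 3))       # attackers receive ~0 trust
        global_update = aggregate(updates, scores)

In this toy run, the two clients whose PLRs sit far from the benign cluster get trust scores near zero, so the aggregated update is dominated by the benign clients' contributions.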