Towards a privacy-preserving and fairness-aware federated learning framework
Abstract
Federated Learning (FL) enables the distributed training of a model across multiple data owners under the orchestration of a central server responsible for aggregating the models generated by the different clients. However, the original FL approach has significant shortcomings with respect to privacy and fairness requirements. Specifically, observing the model updates may lead to privacy issues such as membership inference attacks, while imbalanced local datasets can introduce or amplify classification biases, especially against minority groups. In this work, we show that these biases can be exploited to increase the likelihood of privacy attacks against these groups. To do so, we propose a novel inference attack exploiting the knowledge of group fairness metrics during the training of the global model. Then, to thwart this attack, we define a fairness-aware encrypted-domain aggregation algorithm that is differentially private by design, thanks to the approximate precision loss of the threshold multi-key CKKS homomorphic encryption scheme. Finally, we demonstrate the performance of our proposal, in terms of both fairness and privacy, through experiments conducted on three real datasets.
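The abstract only names the attack; as a rough, hypothetical illustration of how observing a group fairness metric during training could drive a membership inference test, the Python sketch below compares the fairness-gap trajectory observed across FL rounds against shadow trainings that do or do not include a target record. The `demographic_parity_gap` choice of metric and the shadow-trajectory heuristic are assumptions for illustration, not the authors' construction.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """|P(y_hat=1 | g=0) - P(y_hat=1 | g=1)|: a standard group fairness metric."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def membership_guess(observed_gaps, shadow_gaps_in, shadow_gaps_out):
    """Toy membership test: predict 'member' if the per-round fairness-gap
    trajectory observed during training is closer (in L2 distance) to shadow
    runs trained WITH the target record than to runs trained WITHOUT it."""
    observed = np.asarray(observed_gaps)
    d_in = min(np.linalg.norm(observed - np.asarray(g)) for g in shadow_gaps_in)
    d_out = min(np.linalg.norm(observed - np.asarray(g)) for g in shadow_gaps_out)
    return d_in < d_out  # True -> guess that the record was in the training set
```

The intuition this toy test captures is the one stated in the abstract: records from minority groups move the fairness metric more, so their presence or absence leaves a stronger trace in its trajectory.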
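For the defense side, here is a minimal sketch of encrypted-domain aggregation using TenSEAL's single-key CKKS as a stand-in for the threshold multi-key CKKS scheme the paper relies on (threshold key generation and partial decryption are not available in common Python libraries and are omitted, as is the fairness-aware weighting). The sketch only illustrates the property the abstract invokes: the decrypted aggregate carries a small approximation error, the precision loss that the proposal leverages for differential privacy by design.

```python
# pip install tenseal numpy
import tenseal as ts
import numpy as np

# CKKS context (single-key stand-in for the paper's threshold multi-key scheme).
ctx = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
ctx.global_scale = 2 ** 40

# Three clients' toy model updates, encrypted before leaving the client.
rng = np.random.default_rng(0)
updates = [rng.standard_normal(4) for _ in range(3)]
enc_updates = [ts.ckks_vector(ctx, u.tolist()) for u in updates]

# Encrypted-domain aggregation: the server averages ciphertexts
# without ever seeing any individual update in the clear.
enc_avg = enc_updates[0]
for enc_u in enc_updates[1:]:
    enc_avg = enc_avg + enc_u
enc_avg = enc_avg * (1.0 / len(enc_updates))

# Decryption yields the average plus CKKS approximation noise --
# the precision loss the paper exploits for DP by design.
plain_avg = np.mean(updates, axis=0)
noisy_avg = np.array(enc_avg.decrypt())
print("approximation error:", np.abs(noisy_avg - plain_avg))
```

Running this prints element-wise errors on the order of the CKKS scale's precision; in the paper's threshold multi-key setting, no single party could perform the decryption step alone.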