SHIELD – Secure Aggregation against Poisoning in Hierarchical Federated Learning

Federated Learning (FL) is a privacy-preserving distributed Machine Learning (ML) technique. Hierarchical FL is a novel variant of FL applicable to networks with multiple layers. Instead of transmitting client models directly to the server, hierarchical FL performs aggregations in the layers between the devices and the server, which reduces the traffic toward the higher layers and improves link utilization. An adversary can manipulate a set of clients and send malicious model updates toward the upper layers to steer the trained model toward a malicious objective. Such attacks, known as poisoning attacks, disrupt model training. Like FL, hierarchical FL is vulnerable to poisoning attacks since the aggregators do not possess raw data with which to validate the updates they receive. Existing robust aggregation algorithms are designed for flat FL systems with n clients and a single server; therefore, they are not effective against poisoning attacks in hierarchical FL systems. This paper proposes SHIELD, a novel robust aggregation technique that defends hierarchical FL systems against poisoning attacks. We evaluate SHIELD on several datasets from different application areas, under different attack strategies and data distributions. The evaluation results demonstrate that SHIELD effectively defends hierarchical FL systems against poisoning attacks with negligible impact on the benign performance of the models.
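Since SHIELD's actual aggregation rule is not given in this abstract, the following minimal Python sketch only illustrates the hierarchical setting described above: client updates are combined at an intermediate layer before a single aggregate per group travels toward the server. The robust statistic used here (a coordinate-wise median) is an assumption chosen purely for illustration, not the paper's method, and all names below are hypothetical.

import numpy as np

def coordinate_wise_median(updates):
    # Per-parameter median limits the influence of any single poisoned update.
    return np.median(np.stack(updates), axis=0)

def aggregate_layer(groups):
    # Each intermediate aggregator combines only its own group's updates,
    # so one aggregate per group is forwarded to the next layer.
    return [coordinate_wise_median(group) for group in groups]

# Example: 2 intermediate aggregators, each serving 3 clients with a
# 4-parameter model; one client in the first group is poisoned.
rng = np.random.default_rng(0)
clients = [rng.normal(0.0, 0.1, size=4) for _ in range(6)]
clients[0] += 100.0  # poisoned update with a large malicious shift

mid_layer = aggregate_layer([clients[:3], clients[3:]])  # edge-layer aggregation
global_model = coordinate_wise_median(mid_layer)         # server-side aggregation
print(global_model)  # stays near 0: the poisoned update is suppressed

The sketch also shows why flat robust rules do not transfer directly to the hierarchical case: the server only ever sees one aggregate per group, so a defense must already act at the intermediate layers.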

Yushan Siriwardhana, Pawani Porambage, Madhusanka Liyanage, Samuel Marchal, Mika Ylianttila

A1 Journal article (refereed), original research


Y. Siriwardhana, P. Porambage, M. Liyanage, S. Marchal and M. Ylianttila, "SHIELD - Secure Aggregation Against Poisoning in Hierarchical Federated Learning," in IEEE Transactions on Dependable and Secure Computing, vol. 22, no. 2, pp. 1845-1863, March-April 2025, doi: 10.1109/TDSC.2024.3472869

https://doi.org/10.1109/TDSC.2024.3472869
https://urn.fi/URN:NBN:fi:oulu-202410036166