
FedXDS: Leveraging Model Attribution Methods to Counteract Data Heterogeneity in Federated Learning

Maximilian Andreas Hoefler
Karsten Mueller
Wojciech Samek

October 19, 2025

Explainable AI (XAI) methods have demonstrated significant success in recent years at identifying relevant features in input data that drive deep learning model decisions, enhancing interpretability for users. However, the potential of XAI beyond providing model transparency has remained largely unexplored in adjacent machine learning domains. In this paper, we show for the first time how XAI can be utilized in the context of federated learning. Specifically, while federated learning enables collaborative model training without raw data sharing, it suffers from performance degradation when client data distributions exhibit statistical heterogeneity. We introduce FedXDS (Federated Learning via XAI-guided Data Sharing), the first approach to utilize feature attribution techniques to identify precisely which data elements should be selectively shared between clients to mitigate heterogeneity. By employing propagation-based attribution, our method identifies task-relevant features through a single backward pass, enabling selective data sharing that aligns client contributions. To protect sensitive information, we incorporate metric privacy techniques that provide formal privacy guarantees while preserving utility. Experimental results demonstrate that our approach consistently achieves higher accuracy and faster convergence compared to existing methods across varying client numbers and heterogeneity settings. We provide theoretical privacy guarantees and empirically demonstrate robustness against both membership inference and feature inversion attacks. Code is available at https://github.com/MaxH1996/FedXDS.
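To make the pipeline described above concrete, the following is a minimal illustrative sketch, not the actual FedXDS implementation. It assumes a PyTorch client model and uses plain gradient-times-input as a stand-in for the propagation-based attribution named in the abstract, followed by Laplace noise as a simplified stand-in for the metric-privacy mechanism. The function names and parameters (`top_k`, `epsilon`) are hypothetical.

```python
# Illustrative sketch only (not the authors' implementation): rank a client's
# data elements by attribution computed in a single backward pass, keep the
# most task-relevant ones, and perturb them before sharing with other clients.
import torch


def attribute(model, x, target):
    """Single backward pass: gradient x input as a simple relevance proxy."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    logits[range(len(x)), target].sum().backward()
    return (x.grad * x).abs()  # per-element relevance scores


def select_and_privatize(model, x, y, top_k=64, epsilon=1.0):
    """Keep the top_k most relevant elements per sample and add Laplace noise
    (a simplified stand-in for a metric-privacy mechanism)."""
    relevance = attribute(model, x, y).flatten(1)
    idx = relevance.topk(top_k, dim=1).indices
    shared = x.flatten(1).gather(1, idx)
    noise = torch.distributions.Laplace(0.0, 1.0 / epsilon).sample(shared.shape)
    return shared + noise, idx  # candidate payload to exchange between clients
```

In this sketch, each client would run `select_and_privatize` locally and exchange only the noised, attribution-selected elements rather than raw samples; the paper's actual attribution rule, selection criterion, and privacy mechanism are specified in the sections that follow.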