Federated Learning with Intermediate Representation Regularization
October 28, 2022 · Entered Twilight · International Conference on Big Data and Smart Computing
Repo contents: 20221201_torch_env.yml, README.md, cka.py, dirichlet_data_distribution.ipynb, fedavg.ipynb, fedcka.ipynb, federated_learning.py, fedir.ipynb, fedprox.ipynb, model.py, moon.ipynb, training.py, utils.py
Authors
Ye Lin Tun, Chu Myaet Thwal, Yu Min Park, Seong-Bae Park, Choong Seon Hong
arXiv ID
2210.15827
Category
cs.LG: Machine Learning
Cross-listed
cs.AI
Citations
8
Venue
International Conference on Big Data and Smart Computing
Repository
https://github.com/YLTun/FedIntR
★ 1
Last Checked
2 months ago
Abstract
In contrast to centralized model training that involves data collection, federated learning (FL) enables remote clients to collaboratively train a model without exposing their private data. However, model performance usually degrades in FL due to the heterogeneous data generated by clients of diverse characteristics. One promising strategy to maintain good performance is by limiting the local training from drifting far away from the global model. Previous studies accomplish this by regularizing the distance between the representations learned by the local and global models. However, they only consider representations from the early layers of a model or the layer preceding the output layer. In this study, we introduce FedIntR, which provides a more fine-grained regularization by integrating the representations of intermediate layers into the local training process. Specifically, FedIntR computes a regularization term that encourages the closeness between the intermediate layer representations of the local and global models. Additionally, FedIntR automatically determines the contribution of each layer's representation to the regularization term based on the similarity between local and global representations. We conduct extensive experiments on various datasets to show that FedIntR can achieve equivalent or higher performance compared to the state-of-the-art approaches. Our code is available at https://github.com/YLTun/FedIntR.
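The abstract describes a regularizer that (1) measures the distance between each intermediate layer's local and global representations and (2) weights each layer's contribution by the similarity between those representations. Below is a minimal illustrative sketch of that idea in PyTorch. It assumes cosine similarity as the representation-similarity measure and a softmax over per-layer similarities for the weights; the paper's actual formulation (e.g. the CKA-based similarity suggested by `cka.py` in the repository) may differ, and the function and parameter names here are hypothetical.

```python
import torch
import torch.nn.functional as F

def fedintr_style_reg(local_feats, global_feats, tau=1.0):
    """Similarity-weighted regularizer over intermediate representations.

    local_feats / global_feats: lists of per-layer feature tensors of
    shape (batch, ...) taken from the local and global models on the
    same input batch. Returns a scalar regularization term.
    """
    dists = []
    for h_local, h_global in zip(local_feats, global_feats):
        # Per-layer distance: 1 - mean cosine similarity over the batch.
        cos = F.cosine_similarity(
            h_local.flatten(1), h_global.flatten(1), dim=1
        ).mean()
        dists.append(1.0 - cos)
    dists = torch.stack(dists)          # (num_layers,)

    # Layer weights derived from similarity between local and global
    # representations (softmax over per-layer similarities).
    sims = 1.0 - dists
    weights = F.softmax(sims / tau, dim=0)

    # Weighted sum; added to the task loss during local training.
    return (weights * dists).sum()
```

In local training this term would be added to the client's task loss, discouraging the local model's intermediate representations from drifting far from the global model's.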
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt – Machine Learning
XGBoost: A Scalable Tree Boosting System (Ghosted)
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (Ghosted)
Semi-Supervised Classification with Graph Convolutional Networks (Ghosted)
Proximal Policy Optimization Algorithms (Ghosted)