Exploiting Label Skews in Federated Learning with Model Concatenation
December 11, 2023 · Entered Twilight · AAAI Conference on Artificial Intelligence
Repo contents: LICENSE, README.md, config.py, datasets.py, experiments-FeSEM.py, experiments-FedSoft.py, experiments-IFCA.py, experiments-fedconcat-id.py, experiments-init-bias.py, experiments.py, kmeanslimited.py, mnist_generate.py, model.py, models, requirements.txt, resnetcifar.py, run.sh, utils.py, vggmodel.py
Authors
Yiqun Diao, Qinbin Li, Bingsheng He
arXiv ID
2312.06290
Category
cs.LG: Machine Learning
Citations
37
Venue
AAAI Conference on Artificial Intelligence
Repository
https://github.com/sjtudyq/FedConcat
⭐ 14
Last Checked
2 months ago
Abstract
Federated Learning (FL) has emerged as a promising solution for performing deep learning across different data owners without exchanging raw data. However, non-IID data remains a key challenge in FL, as it can significantly degrade the accuracy of the final model. Among the different non-IID types, label skews are both challenging and common in image classification and other tasks. Instead of averaging the local models, as most previous studies do, we propose FedConcat, a simple and effective approach that concatenates these local models as the base of the global model to effectively aggregate the local knowledge. To reduce the size of the global model, we adopt a clustering technique to group the clients by their label distributions and collaboratively train a model inside each cluster. We theoretically analyze the advantage of concatenation over averaging by analyzing the information bottleneck of deep neural networks. Experimental results demonstrate that FedConcat achieves significantly higher accuracy than previous state-of-the-art FL methods under various heterogeneous label-skew distribution settings, while also incurring lower communication costs. Our code is publicly available at https://github.com/sjtudyq/FedConcat.
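The two stages described in the abstract can be sketched in a few lines: first group clients by their per-class label histograms (the abstract mentions a clustering technique; plain k-means is used here as a stand-in), then use the concatenation of the per-cluster feature extractors as the global representation. All function names below are illustrative, not the authors' actual API.

```python
import random

def cluster_by_label_distribution(label_dists, k, iters=20, seed=0):
    """Plain k-means over per-client label histograms (each row sums to 1).

    This is a stand-in for the clustering step FedConcat uses to group
    clients with similar label skews; returns a cluster index per client.
    """
    rng = random.Random(seed)
    centers = [list(c) for c in rng.sample(label_dists, k)]
    assign = [0] * len(label_dists)
    for _ in range(iters):
        # Assign each client to the nearest center (squared Euclidean distance).
        assign = [
            min(range(k),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(x, centers[j])))
            for x in label_dists
        ]
        # Recompute each center as the mean of its members.
        for j in range(k):
            members = [d for d, a in zip(label_dists, assign) if a == j]
            if members:
                centers[j] = [sum(col) / len(members) for col in zip(*members)]
    return assign

def concat_features(encoders, x):
    """Global feature: concatenate every cluster encoder's output on input x."""
    out = []
    for f in encoders:
        out.extend(f(x))
    return out

# Toy usage: 4 clients, 3 classes, two obvious label-skew groups.
dists = [[0.9, 0.05, 0.05], [0.8, 0.1, 0.1],
         [0.05, 0.9, 0.05], [0.1, 0.8, 0.1]]
assign = cluster_by_label_distribution(dists, k=2)
# Clients 0,1 fall in one cluster and clients 2,3 in the other.

# Two toy "encoders" standing in for the trained per-cluster feature extractors.
encoders = [lambda v: [2 * u for u in v], lambda v: [u + 1 for u in v]]
feat = concat_features(encoders, [1.0, 2.0])  # -> [2.0, 4.0, 2.0, 3.0]
```

In the paper's setting, each cluster would train its model collaboratively (e.g. via FedAvg within the cluster), and a classifier head would then be trained on top of the concatenated features; the sketch only shows the grouping and concatenation structure.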
Similar Papers
In the same crypt – Machine Learning
XGBoost: A Scalable Tree Boosting System (👻 Ghosted)
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (👻 Ghosted)
Semi-Supervised Classification with Graph Convolutional Networks (👻 Ghosted)
Proximal Policy Optimization Algorithms (👻 Ghosted)