Paper Title

FedEntropy: Efficient Device Grouping for Federated Learning Using Maximum Entropy Judgment

Paper Authors

Zhiwei Ling, Zhihao Yue, Jun Xia, Ming Hu, Ting Wang, Mingsong Chen

Paper Abstract

Along with the popularity of Artificial Intelligence (AI) and the Internet-of-Things (IoT), Federated Learning (FL) has attracted steadily increasing attention as a promising distributed machine learning paradigm, which enables the training of a central model across numerous decentralized devices without exposing their private data. However, due to the biased data distributions on the involved devices, FL inherently suffers from low classification accuracy in non-IID scenarios. Although various device grouping methods have been proposed to address this problem, most of them neglect both i) the distinct data distribution characteristics of heterogeneous devices, and ii) the contributions and hazards of local models, which are extremely important in determining the quality of global model aggregation. In this paper, we present an effective FL method named FedEntropy with a novel dynamic device grouping scheme, which makes full use of the above two factors based on our proposed maximum entropy judgment heuristic. Unlike existing FL methods that directly aggregate the local models returned from all the selected devices, in each FL round FedEntropy first makes a judgment based on the pre-collected soft labels of the selected devices and then only aggregates the local models that can maximize the overall entropy of these soft labels. By not collecting local models that are harmful to aggregation, FedEntropy can effectively improve global model accuracy while reducing the overall communication overhead. Comprehensive experimental results on well-known benchmarks show that FedEntropy not only outperforms state-of-the-art FL methods in terms of model accuracy and communication overhead, but can also be integrated into them to enhance their classification performance.
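The device-selection step described in the abstract can be illustrated with a small sketch. The Python snippet below is a minimal illustration only, not the authors' implementation: it assumes each selected device pre-uploads its soft labels as an averaged class-probability vector, and it greedily keeps the subset of devices whose combined soft-label distribution has the highest Shannon entropy (i.e., the most balanced class coverage). The function names (`entropy`, `select_devices_by_max_entropy`) and the greedy drop strategy are hypothetical choices for illustration.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a probability vector."""
    p = np.clip(p, eps, 1.0)
    return float(-np.sum(p * np.log(p)))

def select_devices_by_max_entropy(soft_labels):
    """Greedy sketch: starting from all candidate devices, repeatedly drop a
    device whose removal increases the entropy of the averaged soft-label
    distribution, so the kept subset approximately maximizes overall entropy.

    soft_labels: dict mapping device id -> averaged class-probability vector
                 (the soft labels each selected device pre-uploads).
    Returns the ids of devices whose local models would be aggregated.
    """
    selected = set(soft_labels)
    best = entropy(np.mean([soft_labels[d] for d in selected], axis=0))
    improved = True
    while improved and len(selected) > 1:
        improved = False
        for d in list(selected):
            rest = selected - {d}
            h = entropy(np.mean([soft_labels[r] for r in rest], axis=0))
            if h > best:  # dropping d makes the class coverage more uniform
                best, selected, improved = h, rest, True
    return selected

# Toy usage: three devices with skewed label distributions over 4 classes;
# dev2 is largely redundant with dev0, so it is dropped.
soft = {
    "dev0": np.array([0.70, 0.10, 0.10, 0.10]),
    "dev1": np.array([0.10, 0.70, 0.10, 0.10]),
    "dev2": np.array([0.65, 0.15, 0.10, 0.10]),
}
print(select_devices_by_max_entropy(soft))  # {'dev0', 'dev1'}
```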
