Paper Title


Asynchronous Federated Learning with Reduced Number of Rounds and with Differential Privacy from Less Aggregated Gaussian Noise

Authors

Marten van Dijk, Nhuong V. Nguyen, Toan N. Nguyen, Lam M. Nguyen, Quoc Tran-Dinh, Phuong Ha Nguyen

Abstract


The feasibility of federated learning is highly constrained by the server-clients infrastructure in terms of network communication. Most newly launched smartphones and IoT devices are equipped with GPUs or sufficient computing hardware to run powerful AI models. However, in the case of the original synchronous federated learning, client devices suffer waiting times, and regular communication between clients and server is required. This implies more sensitivity to local model training times and to irregular or missed updates; hence, scalability to large numbers of clients is limited and convergence rates measured in real time suffer. We propose a new algorithm for asynchronous federated learning which eliminates waiting times and reduces overall network communication; we provide rigorous theoretical analysis for strongly convex objective functions and provide simulation results. By adding Gaussian noise we show how our algorithm can be made differentially private; new theorems show how the aggregated added Gaussian noise is significantly reduced.
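The two ingredients the abstract combines can be sketched in a few lines: a client performs local SGD steps, and before sending its model delta it clips the delta and adds Gaussian noise (the standard Gaussian-mechanism recipe for differential privacy), while the server applies deltas as they arrive instead of waiting for all clients. This is a minimal illustrative sketch, not the paper's algorithm; all function names, the least-squares objective, and the constants (`lr`, `sigma`, `clip`, step counts) are assumptions chosen for the demo.

```python
import numpy as np

def local_sgd_steps(w, X, y, lr=0.1, steps=5):
    # A few local SGD steps on a least-squares objective (strongly convex),
    # standing in for one client's local training between communications.
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def dp_delta(w_local, w_global, sigma, clip, rng):
    # Clip the client's model delta to bound its sensitivity, then add
    # Gaussian noise before it leaves the device (Gaussian mechanism).
    delta = w_local - w_global
    norm = np.linalg.norm(delta)
    if norm > clip:
        delta = delta * (clip / norm)
    return delta + rng.normal(0.0, sigma, size=delta.shape)

# Server loop: apply each noisy delta as it arrives (no waiting on stragglers).
rng = np.random.default_rng(42)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w = np.zeros(3)
for _ in range(20):                 # each iteration mimics one asynchronous arrival
    w_local = local_sgd_steps(w.copy(), X, y)
    w = w + dp_delta(w_local, w, sigma=0.001, clip=1.0, rng=rng)
print(np.round(w, 1))
```

With a small noise scale the global model still converges close to the true weights; raising `sigma` strengthens privacy at the cost of accuracy, which is why the paper's reduction of the aggregated Gaussian noise matters.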
