Paper Title
Real-time Federated Evolutionary Neural Architecture Search
Paper Authors
Paper Abstract
Federated learning is a distributed machine learning approach that preserves privacy, but two major technical challenges prevent its wider application. One is that federated learning places high demands on communication, since a large number of model parameters must be transmitted between the server and the clients. The other is that training large machine learning models, such as deep neural networks, in federated learning requires a large amount of computational resources, which may be unrealistic for edge devices such as mobile phones. The problem becomes worse when deep neural architecture search is to be carried out in federated learning. To address the above challenges, we propose an evolutionary approach to real-time federated neural architecture search that not only optimizes model performance but also reduces the local payload. During the search, a double-sampling technique is introduced, in which, for each individual, a sub-model randomly sampled from a master model is transmitted to a number of randomly sampled clients and trained without reinitialization. This way, we effectively reduce the computational and communication costs required for evolutionary optimization and avoid large performance fluctuations of the local models, making the proposed framework well suited for real-time federated neural architecture search.
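The sketch below illustrates the double-sampling idea described in the abstract: for each individual of the evolutionary population, (1) a sub-model is randomly sampled from a shared master model and (2) a random subset of clients trains it starting from the master's current weights, with no reinitialization. This is a minimal toy sketch under our own assumptions; names such as MasterModel, local_train, and evaluate are illustrative placeholders, not the authors' actual implementation.

```python
import random
import numpy as np

NUM_BLOCKS = 8          # choice blocks in the master (super-) model
NUM_CLIENTS = 100       # total clients registered with the server
CLIENTS_PER_ROUND = 10  # clients sampled per individual (second sampling)

class MasterModel:
    """Toy master model: one weight vector per choice block."""
    def __init__(self, dim=16):
        self.weights = [np.random.randn(dim) for _ in range(NUM_BLOCKS)]

    def sample_submodel(self):
        """Randomly sample a sub-model as a binary mask over blocks (first sampling)."""
        mask = [random.random() < 0.5 for _ in range(NUM_BLOCKS)]
        if not any(mask):                      # guarantee at least one active block
            mask[random.randrange(NUM_BLOCKS)] = True
        return mask

def local_train(weights, mask, client_id):
    """Placeholder for client-side training of the transmitted sub-model."""
    return [w + 0.01 * np.random.randn(*w.shape) if m else w
            for w, m in zip(weights, mask)]

def evaluate(weights, mask):
    """Placeholder fitness of a sub-model (stands in for validation accuracy)."""
    return -sum(np.linalg.norm(w) for w, m in zip(weights, mask) if m)

master = MasterModel()
population_size = 4

for individual in range(population_size):
    mask = master.sample_submodel()                                   # sample sub-model
    clients = random.sample(range(NUM_CLIENTS), CLIENTS_PER_ROUND)    # sample clients

    # Each sampled client trains the sub-model initialized from the master
    # (no reinitialization); the server averages the returned updates.
    client_updates = [local_train(master.weights, mask, c) for c in clients]
    for b in range(NUM_BLOCKS):
        if mask[b]:
            master.weights[b] = np.mean([u[b] for u in client_updates], axis=0)

    print(f"individual {individual}: fitness = {evaluate(master.weights, mask):.3f}")
```

Because only the sampled blocks of the master model are transmitted and trained, and only a small subset of clients participates per individual, both the per-round payload and the local computation stay bounded, which is the property the abstract attributes to the double-sampling scheme.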