Paper Title
OL4EL: Online Learning for Edge-cloud Collaborative Learning on Heterogeneous Edges with Resource Constraints
Paper Authors
Paper Abstract
Distributed machine learning (ML) at the network edge is a promising paradigm that can preserve both network bandwidth and the privacy of data providers. However, the heterogeneous and limited computation and communication resources on edge servers (or edges) pose great challenges to distributed ML and motivate a new paradigm of Edge Learning (i.e., edge-cloud collaborative machine learning). In this article, we propose a novel 'learning to learn' framework for effective Edge Learning (EL) on heterogeneous edges with resource constraints. We first model the dynamic determination of the collaboration strategy (i.e., the allocation of local iterations at edge servers and global aggregations on the Cloud during the collaborative learning process) as an online optimization problem that trades off EL performance against the resource consumption of edge servers. We then propose an Online Learning for EL (OL4EL) framework based on the budget-limited multi-armed bandit model. OL4EL supports both synchronous and asynchronous learning patterns, and can be used for both supervised and unsupervised learning tasks. To evaluate the performance of OL4EL, we conducted real-world testbed experiments and extensive simulations based on Docker containers, with Support Vector Machines and K-means as use cases. Experimental results demonstrate that OL4EL significantly outperforms state-of-the-art EL and other collaborative ML approaches in terms of the trade-off between learning performance and resource consumption.
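To make the budget-limited multi-armed bandit idea concrete, the sketch below treats each collaboration action (run another local iteration on an edge, or trigger a global aggregation on the Cloud) as an arm with a resource cost, and spends a fixed budget using a reward-per-cost UCB rule. The arm names, costs, and noisy reward model are illustrative assumptions, not the paper's actual formulation.

```python
import math
import random

# Hypothetical arms: (resource cost per pull, assumed mean learning-progress reward).
# These numbers are placeholders, not values from the paper.
ARMS = {
    "local_iteration":    (1.0, 0.3),
    "global_aggregation": (4.0, 0.9),
}

def pull(arm):
    """Simulate a noisy learning-progress reward for an arm (assumption)."""
    cost, mean = ARMS[arm]
    return max(0.0, random.gauss(mean, 0.1)), cost

def budget_limited_ucb(budget, rng_seed=0):
    """Spend a resource budget across arms using a reward-per-cost UCB index."""
    random.seed(rng_seed)
    counts = {a: 0 for a in ARMS}
    means = {a: 0.0 for a in ARMS}
    spent, t, history = 0.0, 0, []

    # Initialise by pulling each arm once.
    for a in ARMS:
        r, c = pull(a)
        counts[a], means[a] = 1, r
        spent += c
        t += 1
        history.append(a)

    while True:
        # Only consider arms that still fit in the remaining budget.
        affordable = [a for a in ARMS if spent + ARMS[a][0] <= budget]
        if not affordable:
            break

        def index(a):
            # UCB exploration bonus, normalised by the arm's resource cost.
            bonus = math.sqrt(2 * math.log(t) / counts[a])
            return (means[a] + bonus) / ARMS[a][0]

        a = max(affordable, key=index)
        r, c = pull(a)
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]  # running-mean update
        spent += c
        t += 1
        history.append(a)
    return history, spent

history, spent = budget_limited_ucb(budget=30.0)
print(f"pulls: {len(history)}, resource spent: {spent:.1f}")
```

The per-cost index is what lets the controller prefer cheap local iterations unless a costly global aggregation yields proportionally more learning progress, which mirrors the performance-versus-resource trade-off the abstract describes.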