Paper Title

Reinforcement Learning-Empowered Mobile Edge Computing for 6G Edge Intelligence

Authors

Peng Wei, Kun Guo, Ye Li, Jue Wang, Wei Feng, Shi Jin, Ning Ge, Ying-Chang Liang

Abstract

Mobile edge computing (MEC) is considered a novel paradigm for computation-intensive and delay-sensitive tasks in fifth-generation (5G) networks and beyond. However, its uncertainty, arising from the dynamics and randomness of mobile devices, wireless channels, and edge networks, results in high-dimensional, nonconvex, nonlinear, and NP-hard optimization problems. Thanks to reinforcement learning (RL), an agent trained by iteratively interacting with this dynamic and random environment can intelligently obtain the optimal policy in MEC. Furthermore, evolved variants such as deep RL (DRL) can achieve higher convergence speed and learning accuracy by parametrically approximating the large-scale state-action space. This paper provides a comprehensive review of RL-enabled MEC and offers insights for development in this area. More importantly, the MEC challenges associated with free mobility, dynamic channels, and distributed services that can be solved by different kinds of RL algorithms are identified, followed by how RL solutions address these challenges in diverse mobile applications. Finally, open challenges are discussed to provide helpful guidance for future research on RL training and learning in MEC.
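The abstract describes an RL agent that learns an optimal policy by iteratively interacting with a dynamic, random MEC environment. A minimal tabular Q-learning sketch of that idea on a hypothetical two-state offloading problem (the states, actions, and rewards below are illustrative assumptions for exposition, not a model from the paper):

```python
import random

# Hypothetical toy model: a device decides whether to execute a task locally
# or offload it to an edge server, under a randomly varying wireless channel.
STATES = ["good_channel", "bad_channel"]   # observed channel condition
ACTIONS = ["local", "offload"]             # offloading decision

def step(state, action, rng):
    # Assumed reward: offloading pays off on a good channel, local is safe.
    if action == "offload":
        reward = 1.0 if state == "good_channel" else -1.0
    else:
        reward = 0.2
    next_state = rng.choice(STATES)        # channel varies randomly
    return next_state, reward

def q_learning(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    state = rng.choice(STATES)
    for _ in range(episodes):
        # epsilon-greedy exploration of the state-action space
        if rng.random() < eps:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward = step(state, action, rng)
        # Q-learning temporal-difference update
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
    return q

q = q_learning()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)
```

DRL methods such as DQN replace the table `q` with a parametric function approximator (e.g., a neural network), which is what enables the large-scale state-action spaces the abstract refers to.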
