Paper Title

Online Task Scheduling for Fog Computing with Multi-Resource Fairness

Authors

Simeng Bian, Xi Huang, Ziyu Shao

Abstract

In fog computing systems, one key challenge is online task scheduling, i.e., deciding the resource allocation for tasks that are continuously generated from end devices. The design is challenging because of the various uncertainties manifested in fog computing systems; e.g., tasks' resource demands remain unknown before their actual arrival. Recent works have applied deep reinforcement learning (DRL) techniques to conduct online task scheduling and improve various objectives. However, they overlook multi-resource fairness across tasks, which is key to achieving fair resource sharing among tasks but is in general non-trivial to achieve. Thus, it remains an open problem to design an online task scheduling scheme with multi-resource fairness. In this paper, we address the above challenges. In particular, by leveraging DRL techniques and adopting the idea of dominant resource fairness (DRF), we propose FairTS, an online task scheduling scheme that learns directly from experience to effectively shorten average task slowdown while ensuring multi-resource fairness among tasks. Simulation results show that FairTS outperforms state-of-the-art schemes with an ultra-low task slowdown and better resource fairness.
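To make the fairness notion concrete: under dominant resource fairness (DRF), each user's "dominant share" is the largest fraction of any single resource allocated to it, and the scheduler equalizes dominant shares across users. Below is a minimal progressive-filling sketch of DRF using the classic two-resource example from the original DRF literature; it is an illustration of the fairness criterion only, not the FairTS scheduler itself (FairTS learns its policy with DRL).

```python
# Minimal sketch of Dominant Resource Fairness (DRF) via progressive
# filling. The cluster and demand vectors are the classic illustrative
# example (9 CPUs, 18 GB memory); this is NOT the paper's implementation.

def dominant_share(alloc, capacity):
    """A user's dominant share is its largest per-resource fraction."""
    return max(a / c for a, c in zip(alloc, capacity))

def drf_progressive_filling(demands, capacity):
    """Repeatedly grant one task to the user with the smallest dominant
    share; stop once that user's next task no longer fits."""
    allocs = [[0.0] * len(capacity) for _ in demands]
    used = [0.0] * len(capacity)
    while True:
        # Pick the user currently holding the smallest dominant share.
        u = min(range(len(demands)),
                key=lambda i: dominant_share(allocs[i], capacity))
        if any(used[r] + demands[u][r] > capacity[r]
               for r in range(len(capacity))):
            return allocs  # cluster saturated for the neediest user
        for r in range(len(capacity)):
            allocs[u][r] += demands[u][r]
            used[r] += demands[u][r]

# User 0 demands <3 CPU, 1 GB> per task; user 1 demands <1 CPU, 4 GB>.
capacity = [9.0, 18.0]
demands = [[3.0, 1.0], [1.0, 4.0]]
allocs = drf_progressive_filling(demands, capacity)
print(allocs)  # → [[6.0, 2.0], [3.0, 12.0]]
```

Both users end up with an equal dominant share of 2/3 (user 0 is CPU-dominated with 6/9, user 1 memory-dominated with 12/18), which is the equilibrium DRF targets.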
