Paper Title
Proximal Policy Optimization Learning based Control of Congested Freeway Traffic
Paper Authors
Abstract
This study proposes a delay-compensated feedback controller based on proximal policy optimization (PPO) reinforcement learning to stabilize traffic flow in the congested regime by manipulating the time gap of adaptive cruise control-equipped (ACC-equipped) vehicles. The traffic dynamics on a freeway segment are governed by an Aw-Rascle-Zhang (ARZ) model, consisting of $2\times 2$ nonlinear first-order partial differential equations (PDEs). Inspired by the backstepping delay compensator [18] but avoiding its complex segmented control scheme, the PPO control is composed of three feedbacks, namely the current traffic flow velocity, the current traffic flow density, and the control input at the previous step. The control gains for the three feedbacks are learned from the interaction between the PPO agent and a numerical simulator of the traffic system, without knowledge of the system dynamics. Numerical simulation experiments are designed to compare the Lyapunov control, the backstepping control, and the PPO control. The results show that for a delay-free system, the PPO control achieves a faster convergence rate with less control effort than the Lyapunov control. For a traffic system with input delay, the performance of the PPO controller is comparable to that of the backstepping controller, even when the assumed delay value does not match the actual delay. Moreover, the PPO controller is robust to parameter perturbations, whereas the backstepping controller fails to stabilize a system in which one of the parameters is disturbed by Gaussian noise.
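The three-feedback structure described in the abstract can be sketched as a simple linear control law. The sketch below is an illustration only: the function name, the steady-state arguments, and the gain values `k_v`, `k_rho`, `k_u` are hypothetical placeholders, whereas in the paper these gains are learned by PPO through interaction with a numerical simulator of the ARZ traffic model.

```python
# Minimal sketch of a three-feedback control law: current velocity,
# current density, and the previous control input.
# All gain values here are illustrative placeholders, not the learned ones.

def ppo_feedback_control(v_outlet, rho_outlet, u_prev,
                         v_star, rho_star,
                         k_v=0.5, k_rho=0.3, k_u=0.2):
    """Compute the ACC time-gap control input from three feedbacks.

    Deviations of the boundary velocity and density are taken with
    respect to the desired steady state (v_star, rho_star); u_prev is
    the control input applied at the previous time step.
    """
    return (k_v * (v_outlet - v_star)
            + k_rho * (rho_outlet - rho_star)
            + k_u * u_prev)

# Example: one control step with illustrative measurements.
u = ppo_feedback_control(v_outlet=10.0, rho_outlet=0.12, u_prev=0.05,
                         v_star=10.5, rho_star=0.115)
```

In the paper, PPO replaces a model-based gain derivation: the agent discovers the feedback gains from simulated closed-loop episodes, which is why the controller needs no explicit knowledge of the PDE dynamics or the exact input delay.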