Paper Title
Quantifying Safety of Learning-based Self-Driving Control Using Almost-Barrier Functions
Paper Authors
Paper Abstract
Path-tracking control of self-driving vehicles can benefit from deep learning for tackling longstanding challenges such as nonlinearity and uncertainty. However, deep neural controllers lack safety guarantees, restricting their practical use. We propose a new approach of learning almost-barrier functions, which approximately characterize the forward-invariant set of the system under a neural controller, to quantitatively analyze the safety of deep neural controllers for path-tracking. We design sampling-based learning procedures for constructing candidate neural barrier functions, as well as certification procedures that utilize robustness analysis for neural networks to identify the regions where the barrier conditions are fully satisfied. We use an adversarial training loop between learning and certification to optimize the almost-barrier functions. The learned barrier can also be used to construct an online safety monitor through reachability analysis. We demonstrate the effectiveness of our methods in quantifying the safety of neural controllers in various simulation environments, ranging from simple kinematic models to the TORCS simulator with high-fidelity vehicle dynamics simulation.
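To make the abstract's learning step concrete, below is a minimal PyTorch sketch of a sampling-based training step for a candidate neural barrier function. It assumes a discrete-time closed-loop system and a sign convention where B(x) >= 0 marks the safe set; the names Barrier and barrier_losses, the network sizes, and the margin parameter are all illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class Barrier(nn.Module):
    """Hypothetical neural barrier B_theta: maps a state x to a scalar B(x).
    Assumed convention: B(x) > 0 on safe samples, B(x) < 0 on unsafe samples,
    and B should not decrease across the zero level set under the closed-loop
    dynamics, so that {x : B(x) >= 0} is (approximately) forward invariant."""
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def barrier_losses(B, x_safe, x_unsafe, x_bdry, x_bdry_next, margin=0.1):
    """Sampling-based surrogate losses for the three barrier conditions.
    x_bdry_next holds one-step successors of boundary samples under the
    fixed neural controller and the closed-loop dynamics; both sample sets
    are assumed to come from a simulator and are not defined here."""
    loss_safe = torch.relu(margin - B(x_safe)).mean()          # enforce B > 0 on safe samples
    loss_unsafe = torch.relu(margin + B(x_unsafe)).mean()      # enforce B < 0 on unsafe samples
    loss_flow = torch.relu(B(x_bdry) - B(x_bdry_next)).mean()  # B non-decreasing near {B = 0}
    return loss_safe + loss_unsafe + loss_flow

if __name__ == "__main__":
    B = Barrier(state_dim=4)
    opt = torch.optim.Adam(B.parameters(), lr=1e-3)
    # Placeholder samples; in practice these would come from state-space
    # sampling and from rolling out the closed-loop system in simulation.
    x_safe, x_unsafe = torch.randn(256, 4), torch.randn(256, 4)
    x_bdry, x_bdry_next = torch.randn(256, 4), torch.randn(256, 4)
    loss = barrier_losses(B, x_safe, x_unsafe, x_bdry, x_bdry_next)
    opt.zero_grad(); loss.backward(); opt.step()
```

In the full approach summarized above, a certification procedure based on neural-network robustness analysis would then determine the regions where these conditions provably hold, and regions that fail certification would be fed back as new training samples, closing the adversarial loop between learning and certification.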