Paper Title
V2X-ViT: Vehicle-to-Everything Cooperative Perception with Vision Transformer
Paper Authors
Paper Abstract
In this paper, we investigate the application of Vehicle-to-Everything (V2X) communication to improve the perception performance of autonomous vehicles. We present a robust cooperative perception framework with V2X communication using a novel vision Transformer. Specifically, we build a holistic attention model, namely V2X-ViT, to effectively fuse information across on-road agents (i.e., vehicles and infrastructure). V2X-ViT consists of alternating layers of heterogeneous multi-agent self-attention and multi-scale window self-attention, which capture inter-agent interaction and per-agent spatial relationships. These key modules are designed in a unified Transformer architecture to handle common V2X challenges, including asynchronous information sharing, pose errors, and heterogeneity of V2X components. To validate our approach, we create a large-scale V2X perception dataset using CARLA and OpenCDA. Extensive experimental results demonstrate that V2X-ViT sets new state-of-the-art performance for 3D object detection and achieves robust performance even under harsh, noisy environments. The code is available at https://github.com/DerrickXuNu/v2x-vit.
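The abstract's core architectural idea is alternating two attention types: heterogeneous multi-agent self-attention (attention across vehicles and infrastructure at each spatial location) and multi-scale window self-attention (per-agent attention within local windows at several sizes). The following is a minimal, hypothetical PyTorch sketch of that alternation, not the authors' implementation (which is at the linked repository); the module names, tensor shapes, agent-type embedding scheme, and mean-fusion of window scales are all illustrative assumptions.

```python
# A minimal, illustrative sketch of the alternating-layer design named in the
# abstract: heterogeneous multi-agent self-attention followed by multi-scale
# window self-attention. This is NOT the authors' implementation (see
# https://github.com/DerrickXuNu/v2x-vit); module names, tensor shapes, the
# agent-type embedding, and the mean-fusion of window scales are assumptions.
import torch
import torch.nn as nn


class HeteroMultiAgentSelfAttention(nn.Module):
    """Attention across agents at each spatial location.

    A learned agent-type embedding (vehicle vs. infrastructure) is a
    simplified stand-in for the heterogeneity handling described in the paper.
    """

    def __init__(self, dim, num_heads=8, num_agent_types=2):
        super().__init__()
        self.type_embed = nn.Embedding(num_agent_types, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x, agent_types):
        # x: (B, A, H, W, C) BEV features from A agents; agent_types: (B, A)
        B, A, H, W, C = x.shape
        x = x + self.type_embed(agent_types)[:, :, None, None, :]
        # Fold spatial positions into the batch so attention runs over agents.
        tokens = x.permute(0, 2, 3, 1, 4).reshape(B * H * W, A, C)
        out, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + out)
        return tokens.reshape(B, H, W, A, C).permute(0, 3, 1, 2, 4)


class MultiScaleWindowSelfAttention(nn.Module):
    """Per-agent attention inside non-overlapping windows at several sizes.

    Scales are fused by averaging here for simplicity; the paper describes a
    more elaborate multi-scale design. H and W must be divisible by each
    window size.
    """

    def __init__(self, dim, num_heads=8, window_sizes=(4, 8)):
        super().__init__()
        self.window_sizes = window_sizes
        self.attns = nn.ModuleList(
            nn.MultiheadAttention(dim, num_heads, batch_first=True)
            for _ in window_sizes
        )
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        # x: (B, A, H, W, C)
        B, A, H, W, C = x.shape
        outs = []
        for ws, attn in zip(self.window_sizes, self.attns):
            # Partition each agent's feature map into ws x ws windows.
            t = x.reshape(B * A, H // ws, ws, W // ws, ws, C)
            t = t.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)
            o, _ = attn(t, t, t)
            # Undo the window partitioning.
            o = o.reshape(B * A, H // ws, W // ws, ws, ws, C)
            o = o.permute(0, 1, 3, 2, 4, 5).reshape(B, A, H, W, C)
            outs.append(o)
        return self.norm(x + torch.stack(outs).mean(dim=0))


# Toy usage: 2 agents (ego vehicle + infrastructure), 64-channel BEV features.
x = torch.randn(1, 2, 16, 16, 64)
types = torch.tensor([[0, 1]])  # 0 = vehicle, 1 = infrastructure
hmsa = HeteroMultiAgentSelfAttention(64)
mswin = MultiScaleWindowSelfAttention(64)
fused = mswin(hmsa(x, types))  # one alternating V2X-ViT-style layer
print(fused.shape)  # torch.Size([1, 2, 16, 16, 64])
```

Stacking several such pairs, as the abstract describes, lets agent-level fusion and per-agent spatial reasoning refine each other layer by layer; the real model additionally handles asynchronous timestamps and pose errors, which this sketch omits.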