Paper Title
Video-Guided Curriculum Learning for Spoken Video Grounding
Paper Authors
Paper Abstract
In this paper, we introduce a new task, spoken video grounding (SVG), which aims to localize the desired video fragments given spoken language descriptions. Compared with text, using audio requires the model to directly exploit the phonemes and syllables related to the video from raw speech. Moreover, we randomly add environmental noise to the speech audio, further increasing the difficulty of this task and better simulating real applications. To rectify the discriminative phonemes and extract video-related information from noisy audio, we develop a novel video-guided curriculum learning (VGCL) scheme for the audio pre-training process, which makes use of vital visual perceptions to help understand the spoken language and suppress the external noise. Considering that the model cannot access ground-truth video segments during inference, we design a curriculum strategy that gradually shifts the input video from the ground-truth segment to the entire video content during pre-training. In this way, the model learns how to extract critical visual information from the entire video clip to help understand the spoken language. In addition, we collect the first large-scale spoken video grounding dataset based on ActivityNet, named the ActivityNet Speech dataset. Extensive experiments demonstrate that our proposed video-guided curriculum learning facilitates the pre-training process to obtain a mutual audio encoder, significantly improving the performance of the spoken video grounding task. Moreover, we show that, in the case of noisy audio, our model outperforms the method that grounds videos with ASR transcripts, further demonstrating the effectiveness of our curriculum strategy.
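The curriculum strategy described in the abstract, which gradually shifts the visual context from the ground-truth segment to the entire video during pre-training, can be illustrated with a small scheduling function. The following is a minimal sketch rather than the authors' released implementation; the name `expand_window`, the linear pacing, and `total_steps` are illustrative assumptions.

```python
# A minimal sketch of the curriculum idea: the video window visible to the audio
# encoder starts as the ground-truth segment and is linearly widened until it
# covers the whole video. Names and the linear schedule are assumptions, not the
# paper's exact formulation.

def expand_window(gt_start, gt_end, video_len, step, total_steps):
    """Widen the visible window from the ground-truth segment (step 0)
    to the full video (step >= total_steps)."""
    ratio = min(step / total_steps, 1.0)           # curriculum progress in [0, 1]
    start = gt_start * (1.0 - ratio)               # move left boundary toward 0
    end = gt_end + (video_len - gt_end) * ratio    # move right boundary toward video_len
    return start, end

# Usage: a 120 s video whose ground-truth segment is [30 s, 45 s]
for step in (0, 5000, 10000):
    print(expand_window(30.0, 45.0, 120.0, step, total_steps=10000))
# (30.0, 45.0) -> (15.0, 82.5) -> (0.0, 120.0)
```

With this linear pacing, the window coincides with the ground-truth segment at step 0 and spans the entire video once `total_steps` is reached; the actual pacing function and step granularity used in the paper may differ.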