Paper Title

EVEREST: Efficient Masked Video Autoencoder by Removing Redundant Spatiotemporal Tokens

Authors

Sunil Hwang, Jaehong Yoon, Youngwan Lee, Sung Ju Hwang

Abstract

Masked Video Autoencoder (MVA) approaches have demonstrated their potential by significantly outperforming previous video representation learning methods. However, they waste an excessive amount of computation and memory predicting uninformative tokens/frames due to random masking strategies, requiring heavy resources for training (e.g., over 16 nodes with 128 NVIDIA A100 GPUs). To resolve this issue, we exploit the unequal information density among the patches in videos and propose EVEREST, a surprisingly efficient MVA approach for video representation learning that finds tokens containing rich motion features and discards uninformative ones during both pre-training and fine-tuning. We further present an information-intensive frame selection strategy that allows the model to focus on informative and causal frames with minimal redundancy. Our method significantly reduces the computation and memory requirements of MVA, enabling pre-training and fine-tuning on a single machine with 8 GPUs while achieving comparable performance to computation- and memory-heavy baselines on multiple benchmarks and the uncurated Ego4D dataset. We hope that our work contributes to reducing the barrier to further research on video understanding.
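
As a rough illustration of the token-selection idea in the abstract, here is a minimal, hypothetical PyTorch sketch (not the authors' released implementation): it scores each patch embedding by how much it changes relative to the same spatial patch in the previous frame and keeps only the highest-motion fraction, discarding the rest as redundant. The function name, tensor layout, and `keep_ratio` value are assumptions for illustration.

```python
import torch

def select_informative_tokens(tokens, keep_ratio=0.25):
    """Keep the patches whose embeddings change most across frames.

    tokens: tensor of shape (B, T, N, D) -- B clips, T frames,
            N spatial patches per frame, D embedding channels.
    keep_ratio: fraction of patches retained per frame (assumed value).
    """
    B, T, N, D = tokens.shape
    # Motion score: L2 distance between each patch embedding and the
    # same spatial patch in the previous frame. The first frame is
    # compared with itself (score 0) -- a simplification for this sketch.
    prev = torch.cat([tokens[:, :1], tokens[:, :-1]], dim=1)
    motion = (tokens - prev).norm(dim=-1)              # (B, T, N)
    # Keep only the top-k highest-motion patches in each frame;
    # the remaining low-motion patches are treated as redundant.
    k = max(1, int(N * keep_ratio))
    idx = motion.topk(k, dim=-1).indices               # (B, T, k)
    kept = torch.gather(tokens, 2,
                        idx.unsqueeze(-1).expand(-1, -1, -1, D))
    return kept, idx                                   # (B, T, k, D)

# Example: 2 clips, 16 frames, 14x14 = 196 patches, 768-dim embeddings.
feats = torch.randn(2, 16, 196, 768)
kept, idx = select_informative_tokens(feats, keep_ratio=0.25)
print(kept.shape)  # torch.Size([2, 16, 49, 768])
```

Because only the kept tokens enter the encoder, the attention cost (quadratic in token count) drops sharply, which is the source of the memory and compute savings the abstract reports.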
