Paper Title
Fast-iTPN: Integrally Pre-Trained Transformer Pyramid Network with Token Migration
Paper Authors
Paper Abstract
We propose the integrally pre-trained transformer pyramid network (iTPN), which jointly optimizes the network backbone and the neck so that the transfer gap between representation models and downstream tasks is minimal. iTPN incorporates two elaborate designs: 1) the first pre-trained feature pyramid upon vision transformer (ViT), and 2) multi-stage supervision of the feature pyramid using masked feature modeling (MFM). iTPN is extended to Fast-iTPN, which reduces computational memory overhead and accelerates inference through two flexible designs: 1) token migration, which drops redundant tokens in the backbone while replenishing them in the feature pyramid without attention operations, and 2) token gathering, which reduces the computation cost of global attention by introducing a few gathering tokens. The base/large-level Fast-iTPN models achieve 88.75%/89.5% top-1 accuracy on ImageNet-1K. With a 1x training schedule using DINO, the base/large-level Fast-iTPN achieves 58.4%/58.8% box AP on COCO object detection, and 57.5%/58.7% mIoU on ADE20K semantic segmentation using MaskDINO. Fast-iTPN can accelerate the inference procedure by up to 70% with negligible performance loss, demonstrating its potential to be a powerful backbone for downstream vision tasks. The code is available at: github.com/sunsmarterjie/iTPN.
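To make the token-migration idea in the abstract concrete, the following is a minimal sketch, not the authors' implementation: redundant backbone tokens are dropped to save compute, then replenished at their original positions before the feature pyramid, with no attention operation involved in the replenishment. The function names (`drop_tokens`, `replenish_tokens`), the L2-norm importance score, and all shapes are illustrative assumptions; the actual selection and fusion rules are defined in the paper and the linked repository.

```python
# Illustrative sketch of token migration (assumed PyTorch-style tensors).
import torch

def drop_tokens(x: torch.Tensor, keep_ratio: float = 0.5):
    """Keep the most informative tokens per image (scored here by L2 norm,
    purely as a placeholder criterion). Returns kept tokens and their indices."""
    B, N, C = x.shape
    k = max(1, int(N * keep_ratio))
    scores = x.norm(dim=-1)                      # [B, N] proxy importance
    keep_idx = scores.topk(k, dim=1).indices     # [B, k]
    kept = torch.gather(x, 1, keep_idx.unsqueeze(-1).expand(-1, -1, C))
    return kept, keep_idx

def replenish_tokens(kept: torch.Tensor, keep_idx: torch.Tensor,
                     full_tokens: torch.Tensor) -> torch.Tensor:
    """Scatter the processed kept tokens back to their original positions;
    dropped slots fall back to the unprocessed tokens. No attention is used."""
    out = full_tokens.clone()
    C = kept.shape[-1]
    out.scatter_(1, keep_idx.unsqueeze(-1).expand(-1, -1, C), kept)
    return out

if __name__ == "__main__":
    x = torch.randn(2, 196, 768)                 # e.g. 14x14 ViT tokens
    kept, idx = drop_tokens(x, keep_ratio=0.5)
    kept = kept * 2.0                            # stand-in for backbone blocks
    full = replenish_tokens(kept, idx, x)
    print(kept.shape, full.shape)                # [2, 98, 768], [2, 196, 768]
```

The design point this sketch mirrors is that the backbone only processes the retained subset of tokens, while the full token map handed to the feature pyramid is rebuilt by simple index scatter rather than by additional attention layers.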