Paper Title


Learning Incrementally to Segment Multiple Organs in a CT Image

Authors

Pengbo Liu, Xia Wang, Mengsi Fan, Hongli Pan, Minmin Yin, Xiaohong Zhu, Dandan Du, Xiaoying Zhao, Li Xiao, Lian Ding, Xingwang Wu, S. Kevin Zhou

Abstract


There exists a large number of datasets for organ segmentation, which are partially annotated and sequentially constructed. A typical dataset is constructed at a certain time by curating medical images and annotating the organs of interest. In other words, new datasets with annotations of new organ categories are built over time. To unleash the potential behind these partially labeled, sequentially constructed datasets, we propose to incrementally learn a multi-organ segmentation model. In each incremental learning (IL) stage, we lose access to the previous data and annotations, whose knowledge is assumed to be captured by the current model, and gain access to a new dataset with annotations of new organ categories, from which we learn to update the organ segmentation model to include the new organs. While IL is notorious for its "catastrophic forgetting" weakness in the context of natural image analysis, we experimentally discover that such a weakness mostly disappears for CT multi-organ segmentation. To further stabilize the model performance across the IL stages, we introduce a light memory module and some loss functions to restrain the representation of different categories in feature space, aggregating the feature representations of the same class and separating the feature representations of different classes. Extensive experiments on five open-source datasets are conducted to illustrate the effectiveness of our method.
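The abstract describes a light memory module together with losses that pull features of the same organ class together and push different classes apart. Below is a minimal, hypothetical PyTorch sketch of that general idea: a per-class prototype memory plus an "aggregation" and a "separation" term. The class name, momentum update, margin value, and loss forms are assumptions for illustration only, not the paper's exact formulation.

```python
# Hypothetical sketch: per-class prototype memory with aggregation/separation losses.
# All design choices here (momentum update, cosine similarity, margin) are assumptions.
import torch
import torch.nn.functional as F


class PrototypeMemory(torch.nn.Module):
    """Light memory: one running feature prototype per organ class."""

    def __init__(self, num_classes: int, feat_dim: int, momentum: float = 0.9):
        super().__init__()
        self.momentum = momentum
        self.register_buffer("prototypes", torch.zeros(num_classes, feat_dim))

    @torch.no_grad()
    def update(self, feats: torch.Tensor, labels: torch.Tensor) -> None:
        # feats: (N, D) voxel features; labels: (N,) class indices.
        for c in labels.unique():
            class_mean = feats[labels == c].mean(dim=0)
            self.prototypes[c] = (
                self.momentum * self.prototypes[c] + (1 - self.momentum) * class_mean
            )

    def losses(self, feats: torch.Tensor, labels: torch.Tensor, margin: float = 0.5):
        protos = F.normalize(self.prototypes, dim=1)  # (C, D)
        feats = F.normalize(feats, dim=1)             # (N, D)
        # Aggregation: each feature should be close to its own class prototype.
        agg = (1 - (feats * protos[labels]).sum(dim=1)).mean()
        # Separation: prototypes of different classes should stay apart.
        sim = protos @ protos.t()                     # (C, C) cosine similarities
        off_diag = sim - torch.eye(sim.size(0), device=sim.device)
        sep = F.relu(off_diag - margin).mean()
        return agg, sep


# Usage with dummy data (3 organ classes, 32-dim features):
memory = PrototypeMemory(num_classes=3, feat_dim=32)
feats = torch.randn(100, 32)
labels = torch.randint(0, 3, (100,))
memory.update(feats, labels)
agg_loss, sep_loss = memory.losses(feats, labels)
total = agg_loss + sep_loss
```

In an IL setting, such a memory is attractive because the prototypes are small enough to carry across stages even after access to the old images and annotations is lost; how the paper actually stores and reuses them should be checked against the original text.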
