Paper Title
Localization of Critical Findings in Chest X-Ray without Local Annotations Using Multi-Instance Learning
Paper Authors
Paper Abstract
The automatic detection of critical findings in chest X-rays (CXR), such as pneumothorax, is important for assisting radiologists in their clinical workflow, for example by triaging time-sensitive cases and screening for incidental findings. While deep learning (DL) models have become a promising predictive technology with near-human accuracy, they commonly suffer from a lack of explainability, which is an important aspect for the clinical deployment of DL models in the highly regulated healthcare industry. For example, localizing critical findings in an image is useful for explaining the predictions of a DL classification algorithm. While there is a host of joint classification and localization methods for computer vision, state-of-the-art DL models require locally annotated training data in the form of pixel-level labels or bounding-box coordinates. In the medical domain, this requires costly manual annotation by medical experts for each critical finding. This requirement becomes a major barrier to training models that can rapidly scale to various findings. In this work, we address these shortcomings with an interpretable DL algorithm based on multi-instance learning that jointly classifies and localizes critical findings in CXR without the need for local annotations. We show competitive classification results on three different critical findings (pneumothorax, pneumonia, and pulmonary edema) from three different CXR datasets.
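As a rough illustration of the idea only (not the authors' exact architecture or training setup), the sketch below shows one common multi-instance learning formulation in PyTorch: a convolutional backbone scores each spatial patch of the CXR, the patch scores are pooled with log-sum-exp into a single image-level prediction trained with image-level labels alone, and the unpooled patch-score map doubles as a coarse localization heatmap. The layer sizes, pooling choice, and toy data here are all assumptions made for brevity.

```python
# Minimal multi-instance learning (MIL) sketch for joint classification and
# localization from image-level labels only. NOT the paper's exact model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MILChestXrayNet(nn.Module):
    def __init__(self, pool_r: float = 5.0):
        super().__init__()
        # Tiny convolutional backbone; a real model would use a deeper trunk
        # (e.g., a ResNet) -- this small stack is an assumption for brevity.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # 1x1 conv maps each spatial cell ("instance") to a finding logit.
        self.instance_head = nn.Conv2d(64, 1, kernel_size=1)
        self.pool_r = pool_r  # sharpness of the log-sum-exp pooling

    def forward(self, x):
        feats = self.backbone(x)                      # (B, 64, H', W')
        instance_logits = self.instance_head(feats)   # (B, 1, H', W')
        # Log-sum-exp pooling over all spatial instances -> one bag-level logit
        # (a smooth approximation of max pooling, common in MIL).
        flat = instance_logits.flatten(start_dim=1)
        bag_logit = torch.logsumexp(self.pool_r * flat, dim=1) / self.pool_r
        return bag_logit, instance_logits             # classification + heatmap

if __name__ == "__main__":
    model = MILChestXrayNet()
    images = torch.randn(4, 1, 256, 256)              # fake CXR batch
    labels = torch.tensor([1.0, 0.0, 1.0, 0.0])        # image-level labels only
    bag_logit, heatmap = model(images)
    loss = F.binary_cross_entropy_with_logits(bag_logit, labels)
    loss.backward()
    # The instance-logit map gives a coarse localization of the finding
    # without any pixel-level or bounding-box supervision.
    print(loss.item(), heatmap.shape)
```

In this formulation the only supervision is the image-level label, yet the per-patch logits provide a spatial explanation of the prediction, which is the role localization plays in the abstract above.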