Paper Title
Evolved Explainable Classifications for Lymph Node Metastases
Paper Authors
Paper Abstract
A novel evolutionary approach for Explainable Artificial Intelligence is presented: the "Evolved Explanations" model (EvEx). This methodology combines Local Interpretable Model-Agnostic Explanations (LIME) with Multi-Objective Genetic Algorithms to allow automated segmentation parameter tuning in image classification tasks. In this case, the dataset studied is PatchCamelyon, composed of patches from pathology whole-slide images. A publicly available Convolutional Neural Network (CNN) was trained on this dataset to provide a binary classification for the presence/absence of lymph node metastatic tissue. In turn, the classifications are explained by means of evolving segmentations, seeking to optimize three evaluation goals simultaneously. The final explanation is computed as the mean of all explanations generated by Pareto-front individuals evolved by the developed genetic algorithm. To enhance reproducibility and traceability of the explanations, each of them was generated from several different, randomly chosen seeds. The observed results show remarkable agreement between seeds: despite the stochastic nature of LIME explanations, regions of high explanation weight show good agreement across heat maps, as measured by pixel-wise relative standard deviations. The resulting heat maps coincide with expert medical segmentations, which demonstrates that this methodology can find high-quality explanations (according to the evaluation metrics), with the novel advantage of automated parameter fine-tuning. These results give additional insight into the inner workings of neural network black-box decision making for medical data.
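
As an illustration of the pipeline the abstract describes, the minimal sketch below (not the authors' released code) shows how a LIME heat map could be built for one Pareto-front individual's SLIC segmentation parameters, how heat maps from all Pareto-front individuals might be averaged into a single explanation, and how a pixel-wise relative standard deviation across seeds could quantify agreement. The `predict_fn` wrapper around the trained CNN, the parameter triples, and the seed list are assumptions made for illustration only.

```python
import numpy as np
from functools import partial
from lime import lime_image
from skimage.segmentation import slic


def explanation_heat_map(image, predict_fn, n_segments, compactness, sigma, seed):
    """LIME explanation for one evolved SLIC parameterization, with the
    per-segment weights spread back onto the pixel grid."""
    explainer = lime_image.LimeImageExplainer(random_state=seed)
    segmentation_fn = partial(slic, n_segments=n_segments,
                              compactness=compactness, sigma=sigma)
    explanation = explainer.explain_instance(
        image, predict_fn, top_labels=1, hide_color=0, num_samples=1000,
        segmentation_fn=segmentation_fn, random_seed=seed)
    label = explanation.top_labels[0]
    heat_map = np.zeros(image.shape[:2], dtype=float)
    for segment_id, weight in explanation.local_exp[label]:
        heat_map[explanation.segments == segment_id] = weight
    return heat_map


def evex_style_explanation(image, predict_fn, pareto_individuals, seeds):
    """Mean heat map over Pareto-front parameter sets and seeds, plus the
    pixel-wise relative standard deviation (RSD) across seeds."""
    per_seed_means = []
    for seed in seeds:
        maps = [explanation_heat_map(image, predict_fn, *params, seed=seed)
                for params in pareto_individuals]
        per_seed_means.append(np.mean(maps, axis=0))  # mean over the Pareto front
    per_seed_means = np.stack(per_seed_means)
    mean_map = per_seed_means.mean(axis=0)            # final averaged explanation
    rsd = per_seed_means.std(axis=0) / (np.abs(mean_map) + 1e-8)
    return mean_map, rsd


# Hypothetical usage: the (n_segments, compactness, sigma) triples would come
# from the multi-objective genetic algorithm's Pareto front.
# pareto_individuals = [(50, 10.0, 1.0), (120, 5.0, 0.5), (200, 20.0, 2.0)]
# mean_map, rsd = evex_style_explanation(patch, cnn_predict_proba,
#                                        pareto_individuals, seeds=[0, 1, 2, 3, 4])
```

Low RSD values in the high-weight regions would correspond to the seed-to-seed agreement the abstract reports; the evolution of the segmentation parameters themselves (the three evaluation objectives) is outside the scope of this sketch.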