Title

W-Net: Dense Semantic Segmentation of Subcutaneous Tissue in Ultrasound Images by Expanding U-Net to Incorporate Ultrasound RF Waveform Data

Authors

Gare, Gautam Rajendrakumar, Li, Jiayuan, Joshi, Rohan, Vaze, Mrunal Prashant, Magar, Rishikesh, Yousefpour, Michael, Rodriguez, Ricardo Luis, Galeotti, John Micheal

Abstract

We present W-Net, a novel Convolutional Neural Network (CNN) framework that employs the raw ultrasound waveforms from each A-scan, typically referred to as ultrasound Radio Frequency (RF) data, in addition to the gray ultrasound image, to semantically segment and label tissues. Unlike prior work, we seek to label every pixel in the image, without the use of a background class. To the best of our knowledge, this is also the first deep-learning or CNN approach for segmentation that analyzes ultrasound raw RF data along with the gray image. International patent(s) pending [PCT/US20/37519]. We chose subcutaneous tissue (SubQ) segmentation as our initial clinical goal since it has diverse intermixed tissues, is challenging to segment, and is an underrepresented research area. Potential SubQ applications include plastic surgery, adipose stem-cell harvesting, lymphatic monitoring, and possibly detection/treatment of certain types of tumors. A custom dataset consisting of images hand-labeled by an expert clinician and trainees is used for the experimentation, currently labeled into the following categories: skin, fat, fat fascia/stroma, muscle, and muscle fascia. We compared our results with U-Net and Attention U-Net. Our novel \emph{W-Net}'s RF-waveform input and architecture increased mIoU accuracy (averaged across all tissue classes) by 4.5\% and 4.9\% compared to regular U-Net and Attention U-Net, respectively. We present an analysis of why the muscle fascia and fat fascia/stroma are the most difficult tissues to label. Muscle fascia in particular, the most difficult anatomic class to recognize for both humans and AI algorithms, saw mIoU improvements of 13\% and 16\% from our W-Net vs. U-Net and Attention U-Net, respectively.
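The reported gains are in mean Intersection-over-Union (mIoU), averaged across the tissue classes. As a point of reference, a minimal sketch of how per-class IoU and mIoU are typically computed from predicted and ground-truth label maps is shown below (NumPy, with toy 2x2 label maps; this is a generic illustration, not the authors' evaluation code):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean IoU over classes, skipping classes absent from both maps."""
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue  # class appears in neither map; exclude from the mean
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))

# Toy example: two classes, 2x2 label maps
pred = np.array([[0, 0], [1, 1]])
target = np.array([[0, 1], [1, 1]])
# Class 0: IoU = 1/2; class 1: IoU = 2/3; mIoU = 7/12
print(mean_iou(pred, target, 2))
```

In the dense, no-background setting described in the abstract, every pixel belongs to one of the five tissue classes, so the average runs over all labeled categories rather than foreground classes only.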
