Paper Title

Rethinking the optimization process for self-supervised model-driven MRI reconstruction

Paper Authors

Huang, Weijian, Li, Cheng, Fan, Wenxin, Zhou, Yongjin, Liu, Qiegen, Zheng, Hairong, Wang, Shanshan

Paper Abstract

Recovering high-quality images from undersampled measurements is critical for accelerated MRI reconstruction. Recently, various supervised deep learning-based MRI reconstruction methods have been developed. Despite their promising performance, these methods require fully sampled reference data, the acquisition of which is resource-intensive and time-consuming. Self-supervised learning has emerged as a promising solution to alleviate the reliance on fully sampled datasets. However, existing self-supervised methods suffer from reconstruction errors due to the insufficient constraint enforced on the non-sampled data points and the error accumulation that occurs along the iterative image reconstruction process in model-driven deep learning reconstructions. To address these challenges, we propose K2Calibrate, a K-space adaptation strategy for self-supervised model-driven MR reconstruction optimization. By iteratively calibrating the learned measurements, K2Calibrate can reduce the network's reconstruction deterioration caused by statistically dependent noise. Extensive experiments have been conducted on the open-source dataset FastMRI, and K2Calibrate achieves better results than five state-of-the-art methods. The proposed K2Calibrate is plug-and-play and can be easily integrated with different model-driven deep learning reconstruction methods.
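To make the abstract's "plug-and-play" claim concrete, the minimal Python sketch below shows how a k-space calibration step could, in principle, be slotted between the learned regularization step and the data-consistency step of an unrolled, model-driven reconstruction loop. This is not the authors' implementation: the names `unrolled_recon`, `calibrate_kspace`, `data_consistency`, `denoiser`, and the `keep_prob` parameter are hypothetical, and the calibration shown here (randomly re-masking a fraction of the learned, non-acquired k-space points) only mimics the spirit of "iteratively calibrating the learned measurements" described in the abstract, not the exact K2Calibrate operation.

```python
# A minimal sketch (assumed structure, not the paper's code) of a plug-in
# k-space calibration step inside an unrolled, model-driven MRI reconstruction.
import torch
import torch.fft as fft

def data_consistency(k_est, k_sampled, mask):
    """Keep acquired k-space samples; trust the network only where mask == 0."""
    return mask * k_sampled + (1 - mask) * k_est

def calibrate_kspace(k_est, mask, keep_prob=0.9):
    """Hypothetical calibration: randomly drop a fraction of the learned
    (non-acquired) k-space points so later iterations are not driven by a
    single noisy estimate. Acquired points (mask == 1) are always kept."""
    rand_keep = (torch.rand_like(mask) < keep_prob).float()
    keep = mask + (1 - mask) * rand_keep
    return k_est * keep

def unrolled_recon(k_sampled, mask, denoiser, n_iters=5):
    """Alternate a learned image-space denoiser with k-space data consistency,
    calibrating the learned k-space at every iteration."""
    image = fft.ifft2(k_sampled)                  # zero-filled initialization
    for _ in range(n_iters):
        image = denoiser(image)                   # learned regularization step
        k_est = fft.fft2(image)
        k_est = calibrate_kspace(k_est, mask)     # plug-in calibration step
        k_est = data_consistency(k_est, k_sampled, mask)
        image = fft.ifft2(k_est)
    return image.abs()

# Toy usage with an identity "network" and a random sampling mask.
if __name__ == "__main__":
    k_full = fft.fft2(torch.randn(1, 1, 64, 64))
    mask = (torch.rand(1, 1, 64, 64) < 0.3).float()
    k_sampled = k_full * mask
    recon = unrolled_recon(k_sampled, mask, denoiser=lambda x: x)
    print(recon.shape)
```

In this sketch the acquired measurements are always re-inserted by `data_consistency`, so the calibration only perturbs network-predicted k-space points; this is one plausible way a calibration module can remain "plug-and-play" with respect to the surrounding unrolled reconstruction.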
