Paper Title

E-CIR: Event-Enhanced Continuous Intensity Recovery

Paper Authors

Chen Song, Qixing Huang, Chandrajit Bajaj

Paper Abstract

A camera begins to sense light the moment we press the shutter button. During the exposure interval, relative motion between the scene and the camera causes motion blur, a common undesirable visual artifact. This paper presents E-CIR, which converts a blurry image into a sharp video represented as a parametric function from time to intensity. E-CIR leverages events as an auxiliary input. We discuss how to exploit the temporal event structure to construct the parametric bases. We demonstrate how to train a deep learning model to predict the function coefficients. To improve the appearance consistency, we further introduce a refinement module to propagate visual features among consecutive frames. Compared to state-of-the-art event-enhanced deblurring approaches, E-CIR generates smoother and more realistic results. The implementation of E-CIR is available at https://github.com/chensong1995/E-CIR.
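
To illustrate the continuous-intensity idea described in the abstract, the sketch below (not taken from the official repository) evaluates a per-pixel parametric intensity function at arbitrary timestamps to render sharp frames. The polynomial basis, the helper names `evaluate_intensity` and `render_video`, and the array shapes are illustrative assumptions only; E-CIR constructs its bases from the temporal event structure and predicts the coefficients with a deep network.

```python
# Minimal sketch (illustrative, not the E-CIR implementation): each pixel's
# intensity over the exposure interval is a parametric function of time, and
# sharp frames are obtained by evaluating that function at chosen timestamps.
# A plain polynomial basis is assumed here for simplicity.
import numpy as np

def evaluate_intensity(coeffs: np.ndarray, t: float) -> np.ndarray:
    """Evaluate per-pixel intensity I(t) = sum_k coeffs[k] * t**k.

    coeffs: (K, H, W) per-pixel coefficients, standing in for the values a
            network would predict from the blurry image and events.
    t:      timestamp in [0, 1] within the exposure interval.
    Returns an (H, W) sharp frame at time t.
    """
    K = coeffs.shape[0]
    basis = np.array([t ** k for k in range(K)])      # (K,)
    return np.tensordot(basis, coeffs, axes=(0, 0))   # (H, W)

def render_video(coeffs: np.ndarray, num_frames: int = 10) -> np.ndarray:
    """Sample the continuous intensity function at evenly spaced timestamps."""
    timestamps = np.linspace(0.0, 1.0, num_frames)
    return np.stack([evaluate_intensity(coeffs, t) for t in timestamps])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    coeffs = rng.standard_normal((4, 64, 64))   # toy degree-3 coefficients per pixel
    video = render_video(coeffs, num_frames=10) # (10, 64, 64) sharp frame stack
    # The blurry input corresponds to the time-average of the intensity function.
    blurry_approx = video.mean(axis=0)
    print(video.shape, blurry_approx.shape)
```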
