Paper Title
Learning Robust Feature Representations for Scene Text Detection
Paper Authors
Paper Abstract
Scene text detection based on deep neural networks has progressed substantially over the past few years. However, previous state-of-the-art methods may still fall short on challenging public benchmarks, because an algorithm's performance depends on robust feature extraction and on the components of the network architecture. To address this issue, we present a network architecture derived from a loss that maximizes the conditional log-likelihood by optimizing its lower bound with a proper approximate posterior, an approach that has shown impressive performance in several generative models. In addition, by extending the latent variables from a single layer to multiple layers, the network is able to learn robust features at scale without task-specific regularization or data augmentation. We provide a detailed analysis and report results on three public benchmark datasets to confirm the efficiency and reliability of the proposed algorithm. In experiments, the proposed algorithm significantly outperforms state-of-the-art methods in terms of both recall and precision. Specifically, it achieves H-means of 95.12 and 96.78 on ICDAR 2011 and ICDAR 2013, respectively.
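The abstract does not state the lower-bound objective explicitly. For a conditional generative model with input $x$, output $y$, and latent variable $z$, the standard conditional variational lower bound used in CVAE-style models takes the following form; this is a sketch for context, and the notation ($x$, $y$, $z$, $\theta$, $\phi$) is assumed rather than taken from the paper:

```latex
% Conditional log-likelihood bounded below by the variational lower bound:
% reconstruction term under the approximate posterior q, minus the KL
% divergence between q and the conditional prior.
\log p_\theta(y \mid x)
  \;\ge\;
  \mathbb{E}_{q_\phi(z \mid x,\, y)}\!\left[ \log p_\theta(y \mid x,\, z) \right]
  \;-\;
  D_{\mathrm{KL}}\!\left( q_\phi(z \mid x,\, y) \,\big\|\, p_\theta(z \mid x) \right)
```

Under this reading, extending the latent variables to multiple layers $z_1, \dots, z_L$ (as the abstract describes) would yield a hierarchical bound with one reconstruction term and a KL term per latent layer.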