Paper Title

Bayesian Neural Networks for Reversible Steganography

Author

Chang, Ching-Chun

Abstract

Recent advances in deep learning have led to a paradigm shift in the field of reversible steganography. A fundamental pillar of reversible steganography is predictive modelling which can be realised via deep neural networks. However, non-trivial errors exist in inferences about some out-of-distribution and noisy data. In view of this issue, we propose to consider uncertainty in predictive models based upon a theoretical framework of Bayesian deep learning, thereby creating an adaptive steganographic system. Most modern deep-learning models are regarded as deterministic because they only offer predictions while failing to provide uncertainty measurement. Bayesian neural networks bring a probabilistic perspective to deep learning and can be regarded as self-aware intelligent machinery; that is, a machine that knows its own limitations. To quantify uncertainty, we apply Bayesian statistics to model the predictive distribution and approximate it through Monte Carlo sampling with stochastic forward passes. We further show that predictive uncertainty can be disentangled into aleatoric and epistemic uncertainties and these quantities can be learnt unsupervised. Experimental results demonstrate an improvement delivered by Bayesian uncertainty analysis upon steganographic rate-distortion performance.

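The abstract describes approximating the predictive distribution through Monte Carlo sampling with stochastic forward passes and splitting the resulting predictive uncertainty into aleatoric and epistemic parts. The sketch below illustrates that general idea only; it is not the authors' implementation. Monte Carlo dropout is assumed as one common way to realise stochastic forward passes, and the names `MCDropoutPredictor`, `mc_uncertainty`, the context size `in_dim`, the dropout rate, and `n_samples` are all illustrative. The heteroscedastic log-variance head is the standard device for learning aleatoric uncertainty without uncertainty labels, which is assumed to correspond to the abstract's "learnt unsupervised".

```python
import torch
import torch.nn as nn

class MCDropoutPredictor(nn.Module):
    """Illustrative pixel-value predictor with dropout layers that are kept
    stochastic at inference time (Monte Carlo dropout)."""
    def __init__(self, in_dim=8, hidden=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p=0.2),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p=0.2),
        )
        self.mean_head = nn.Linear(hidden, 1)     # predicted pixel intensity
        self.logvar_head = nn.Linear(hidden, 1)   # heteroscedastic (aleatoric) log-variance

    def forward(self, x):
        h = self.backbone(x)
        return self.mean_head(h), self.logvar_head(h)

@torch.no_grad()
def mc_uncertainty(model, x, n_samples=50):
    """Approximate the predictive distribution with stochastic forward passes
    and decompose its variance into aleatoric and epistemic components."""
    model.train()  # keep dropout active so each forward pass is a Monte Carlo sample
    means, variances = [], []
    for _ in range(n_samples):
        mu, logvar = model(x)
        means.append(mu)
        variances.append(logvar.exp())
    means = torch.stack(means)          # shape: (n_samples, batch, 1)
    variances = torch.stack(variances)
    aleatoric = variances.mean(dim=0)   # expected data noise (irreducible)
    epistemic = means.var(dim=0)        # disagreement across sampled predictions
    prediction = means.mean(dim=0)
    return prediction, aleatoric, epistemic

# Illustrative usage: pixels with high total uncertainty can be embedded with
# fewer bits or skipped, trading capacity for lower embedding distortion.
model = MCDropoutPredictor()
x = torch.randn(16, 8)                  # 16 pixels, 8 context features each (made-up data)
pred, aleatoric, epistemic = mc_uncertainty(model, x)
total_uncertainty = aleatoric + epistemic
```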