Paper Title
On Quantizing Implicit Neural Representations
Paper Authors
Paper Abstract
The role of quantization within implicit/coordinate neural networks is still not fully understood. We note that using a canonical fixed quantization scheme during training produces poor performance at low bit-rates, because the network weight distributions change over the course of training. In this work, we show that a non-uniform quantization of neural weights can lead to significant improvements. Specifically, we demonstrate that a clustered quantization enables improved reconstruction. Finally, by characterising a trade-off between quantization and network capacity, we demonstrate that it is possible (though memory-inefficient) to reconstruct signals using binary neural networks. We demonstrate our findings experimentally on 2D image reconstruction and 3D radiance fields, and show that simple quantization methods and architecture search can compress NeRF to less than 16kB with minimal loss in performance (323x smaller than the original NeRF).
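To illustrate the clustered (non-uniform) quantization idea mentioned in the abstract, here is a minimal sketch: weights are grouped into k clusters with a simple k-means (Lloyd's) loop, and each weight is replaced by its nearest centroid, so only a per-weight cluster index plus a small codebook need be stored. The function name, initialization strategy, and hyperparameters are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def cluster_quantize(weights, n_clusters=16, n_iters=20):
    """Quantize a weight array to n_clusters centroids via k-means (Lloyd's).

    Returns (indices, codebook): each weight is stored as a
    log2(n_clusters)-bit index into the shared codebook.
    NOTE: illustrative sketch, not the paper's exact algorithm.
    """
    w = weights.ravel()
    # Initialize centroids from quantiles so they follow the
    # (possibly non-uniform) empirical weight distribution.
    codebook = np.quantile(w, np.linspace(0.0, 1.0, n_clusters))
    for _ in range(n_iters):
        # Assign each weight to its nearest centroid.
        idx = np.abs(w[:, None] - codebook[None, :]).argmin(axis=1)
        # Move each centroid to the mean of its assigned weights.
        for k in range(n_clusters):
            mask = idx == k
            if mask.any():
                codebook[k] = w[mask].mean()
    idx = np.abs(w[:, None] - codebook[None, :]).argmin(axis=1)
    return idx.reshape(weights.shape), codebook

# Usage: 4-bit clustered quantization of a toy bimodal weight matrix.
rng = np.random.default_rng(0)
w = np.concatenate([rng.normal(-0.5, 0.1, 500),
                    rng.normal(0.5, 0.1, 500)]).reshape(100, 10)
idx, codebook = cluster_quantize(w, n_clusters=16)
w_hat = codebook[idx]  # dequantized weights
print("max abs reconstruction error:", np.abs(w - w_hat).max())
```

Because the centroids adapt to the empirical weight distribution, a clustered codebook wastes no levels on empty regions, which is the intuition behind its advantage over a fixed uniform grid at low bit-rates.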