Paper Title
Target-Embedding Autoencoders for Supervised Representation Learning
Paper Authors
Paper Abstract
Autoencoder-based learning has emerged as a staple for disciplining representations in unsupervised and semi-supervised settings. This paper analyzes a framework for improving generalization in a purely supervised setting, where the target space is high-dimensional. We motivate and formalize the general framework of target-embedding autoencoders (TEA) for supervised prediction, learning intermediate latent representations jointly optimized to be both predictable from features as well as predictive of targets---encoding the prior that variations in targets are driven by a compact set of underlying factors. As our theoretical contribution, we provide a guarantee of generalization for linear TEAs by demonstrating uniform stability, interpreting the benefit of the auxiliary reconstruction task as a form of regularization. As our empirical contribution, we extend validation of this approach beyond existing static classification applications to multivariate sequence forecasting, verifying their advantage on both linear and nonlinear recurrent architectures---thereby underscoring the further generality of this framework beyond feedforward instantiations.
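To make the framework concrete, the following is a minimal sketch of a *linear* target-embedding autoencoder, the setting of the paper's stability analysis. All names (`E`, `D`, `P`, `lam`) and the synthetic data are illustrative assumptions, not the authors' code: the latent code `z = y @ E` embeds the *target*, `D` reconstructs the target from `z` (the auxiliary reconstruction task), and `P` predicts `z` from the features (the supervised path), with both objectives minimized jointly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_x, d_y, d_z = 200, 10, 30, 5

# Synthetic data: high-dimensional targets driven by a compact set of factors.
Z_true = rng.normal(size=(n, d_z))
X = Z_true @ rng.normal(size=(d_z, d_x)) + 0.1 * rng.normal(size=(n, d_x))
Y = Z_true @ rng.normal(size=(d_z, d_y)) + 0.1 * rng.normal(size=(n, d_y))

E = rng.normal(scale=0.1, size=(d_y, d_z))  # target encoder
D = rng.normal(scale=0.1, size=(d_z, d_y))  # decoder: latent -> target
P = rng.normal(scale=0.1, size=(d_x, d_z))  # predictor: feature -> latent
lam, lr = 1.0, 1e-3                         # task weight and step size (illustrative)

def joint_loss(E, D, P):
    Z = Y @ E
    rec = Z @ D - Y    # target-reconstruction residual (auxiliary task)
    pred = X @ P - Z   # latent-prediction residual (supervised task)
    return (rec**2).mean() + lam * (pred**2).mean()

loss_before = joint_loss(E, D, P)
for _ in range(500):  # plain gradient descent on the joint objective
    Z = Y @ E
    rec = Z @ D - Y
    pred = X @ P - Z
    gD = 2 * Z.T @ rec / (n * d_y)
    gE = 2 * (Y.T @ rec @ D.T / (n * d_y) - lam * Y.T @ pred / (n * d_z))
    gP = 2 * lam * X.T @ pred / (n * d_z)
    E -= lr * gE; D -= lr * gD; P -= lr * gP
loss_after = joint_loss(E, D, P)

# At test time, targets are predicted by composing predictor and decoder.
Y_hat = X @ P @ D
```

Note the design choice this illustrates: the encoder acts on targets rather than features, so the reconstruction term regularizes the latent space toward codes from which the high-dimensional target is recoverable, while the prediction term ties those codes to the features.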