Paper Title
Corruption-tolerant Algorithms for Generalized Linear Models
Paper Authors
Paper Abstract
This paper presents SVAM (Sequential Variance-Altered MLE), a unified framework for learning generalized linear models under adversarial label corruption in training data. SVAM extends to tasks such as least squares regression, logistic regression, and gamma regression, whereas many existing works on learning with label corruptions focus only on least squares regression. SVAM is based on a novel variance reduction technique that may be of independent interest and works by iteratively solving weighted MLEs over variance-altered versions of the GLM objective. SVAM offers provable model recovery guarantees superior to the state-of-the-art for robust regression even when a constant fraction of training labels are adversarially corrupted. SVAM also empirically outperforms several existing problem-specific techniques for robust regression and classification. Code for SVAM is available at https://github.com/purushottamkar/svam/.
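The following is a minimal illustrative sketch, not the authors' released implementation, of the variance-altered weighted-MLE iteration described in the abstract, specialized to least squares regression. The function name svam_ls and the parameters beta0 (initial precision) and xi (precision growth factor) are illustrative assumptions, not identifiers from the paper.

    # Sketch of a variance-altered weighted-MLE loop for robust least squares.
    # At each step, every point is weighted by its likelihood under a Gaussian
    # model whose variance has been artificially altered (precision beta), a
    # weighted MLE (weighted least squares) is solved, and beta is increased
    # so that points with large residuals receive progressively smaller weights.
    # svam_ls, beta0, and xi are hypothetical names chosen for this sketch.
    import numpy as np

    def svam_ls(X, y, beta0=0.01, xi=1.2, n_iters=50):
        n, d = X.shape
        w = np.zeros(d)          # current model estimate
        beta = beta0             # altered inverse-variance (precision) parameter
        for _ in range(n_iters):
            residuals = y - X @ w
            # likelihood weights under the variance-altered Gaussian model
            s = np.exp(-0.5 * beta * residuals ** 2)
            # weighted MLE for the Gaussian GLM = weighted least squares
            Xs = X * s[:, None]
            w = np.linalg.solve(Xs.T @ X + 1e-8 * np.eye(d), Xs.T @ y)
            beta *= xi           # sharpen the altered likelihood
        return w

    # Usage: recover a model when a fraction of labels are adversarially corrupted.
    rng = np.random.default_rng(0)
    n, d = 500, 10
    X = rng.normal(size=(n, d))
    w_star = rng.normal(size=d)
    y = X @ w_star + 0.01 * rng.normal(size=n)
    y[:100] = 10.0 * rng.normal(size=100)   # corrupt 20% of the labels
    print(np.linalg.norm(svam_ls(X, y) - w_star))

Increasing beta geometrically makes the Gaussian weighting progressively sharper, so points with large residuals (likely corrupted) contribute less and less to each successive weighted MLE, which is the intuition behind the iterative variance-altering scheme the abstract describes.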