Paper Title
Non-Asymptotic Guarantees for Robust Statistical Learning under Infinite Variance Assumption
Paper Authors
Paper Abstract
There has been a surge of interest in statistics and machine learning in developing robust estimators for models with heavy-tailed but bounded-variance data, while few works address the unbounded-variance setting. This paper proposes two types of robust estimators: the ridge log-truncated M-estimator and the elastic net log-truncated M-estimator. The first is applied to convex regressions such as quantile regression and generalized linear models, while the second is applied to high-dimensional non-convex learning problems such as regression via deep neural networks. Simulations and real data analysis demonstrate the robustness of log-truncated estimators over standard ones.
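
To make the "log-truncated M-estimator" idea concrete, below is a minimal sketch of a ridge log-truncated least-squares estimator. It assumes a Catoni-type truncation phi(x) = log(1 + x + x^2/2) applied to the per-sample loss; the function names and the tuning parameters `lam` (truncation scale) and `gamma` (ridge weight) are illustrative choices, not the paper's notation or method details.

```python
import numpy as np
from scipy.optimize import minimize

def log_truncate(x):
    # Catoni-style truncation: behaves like x for small x
    # (log(1 + x + x^2/2) ~ x as x -> 0) but grows only
    # logarithmically for large x, damping heavy-tailed losses.
    return np.log1p(x + 0.5 * x**2)

def ridge_log_truncated_objective(theta, X, y, lam, gamma):
    # Non-negative squared-error loss per sample, truncated at
    # scale lam, averaged, then rescaled back by 1/lam; plus a
    # ridge penalty. lam and gamma are hypothetical tuning knobs.
    losses = (y - X @ theta) ** 2
    return np.mean(log_truncate(lam * losses)) / lam + gamma * np.sum(theta**2)

# Usage on synthetic data with infinite-variance noise
# (Student-t with df = 1.5 has no finite second moment).
rng = np.random.default_rng(0)
n, d = 500, 5
X = rng.normal(size=(n, d))
theta_true = rng.normal(size=d)
y = X @ theta_true + rng.standard_t(df=1.5, size=n)

res = minimize(ridge_log_truncated_objective, x0=np.zeros(d),
               args=(X, y, 0.05, 1e-3), method="L-BFGS-B")
print("estimate:", res.x)
print("truth:   ", theta_true)
```

Because the truncated objective stays smooth and convex-like near the truth while flattening extreme residuals, a generic quasi-Newton solver suffices here; swapping the squared loss for a quantile or GLM loss, or the ridge penalty for an elastic net penalty, would mirror the two estimator variants named in the abstract.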