Paper Title
Private Convex Optimization in General Norms
Paper Authors
Paper Abstract
We propose a new framework for differentially private optimization of convex functions which are Lipschitz in an arbitrary norm $\|\cdot\|$. Our algorithms are based on a regularized exponential mechanism which samples from the density $\propto \exp(-k(F + \mu r))$ where $F$ is the empirical loss and $r$ is a regularizer which is strongly convex with respect to $\|\cdot\|$, generalizing a recent work of [Gopi, Lee, Liu '22] to non-Euclidean settings. We show that this mechanism satisfies Gaussian differential privacy and solves both DP-ERM (empirical risk minimization) and DP-SCO (stochastic convex optimization) by using localization tools from convex geometry. Our framework is the first to apply to private convex optimization in general normed spaces and directly recovers non-private SCO rates achieved by mirror descent as the privacy parameter $\varepsilon \to \infty$. As applications, for Lipschitz optimization in $\ell_p$ norms for all $p \in (1, 2)$, we obtain the first optimal privacy-utility tradeoffs; for $p = 1$, we improve tradeoffs obtained by the recent works [Asi, Feldman, Koren, Talwar '21, Bassily, Guzman, Nandi '21] by at least a logarithmic factor. Our $\ell_p$ norm and Schatten-$p$ norm optimization frameworks are complemented with polynomial-time samplers whose query complexity we explicitly bound.
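To make the mechanism concrete, the following is a minimal illustrative sketch (not the paper's actual sampler) of a regularized exponential mechanism in one dimension: it samples from the density $\propto \exp(-k(F + \mu r))$ for a toy Lipschitz empirical loss $F$ and a strongly convex regularizer $r$, using random-walk Metropolis. The specific values of `k`, `mu`, the sampler, and the toy loss are all assumptions chosen for illustration; the paper works with general norms and gives explicit query-complexity bounds for its samplers.

```python
import numpy as np

# Illustrative sketch only: a 1-D regularized exponential mechanism.
# Target density: p(theta) ∝ exp(-k * (F(theta) + mu * r(theta))),
# where F is a 1-Lipschitz empirical loss and r is strongly convex.
# Parameters k, mu and the Metropolis sampler are chosen for
# illustration, not taken from the paper.

def empirical_loss(theta, data):
    """Toy empirical loss F: mean absolute deviation (1-Lipschitz)."""
    return np.mean(np.abs(theta - data))

def log_density(theta, data, k, mu):
    """Unnormalized log-density -k * (F + mu * r)."""
    r = 0.5 * theta ** 2  # strongly convex regularizer (Euclidean case)
    return -k * (empirical_loss(theta, data) + mu * r)

def sample_mechanism(data, k=50.0, mu=0.01, steps=5000, step_size=0.2, seed=0):
    """Random-walk Metropolis sampler targeting the mechanism's density."""
    rng = np.random.default_rng(seed)
    theta = 0.0
    for _ in range(steps):
        prop = theta + step_size * rng.standard_normal()
        log_accept = (log_density(prop, data, k, mu)
                      - log_density(theta, data, k, mu))
        if np.log(rng.uniform()) < log_accept:
            theta = prop
    return theta

data = np.array([0.8, 1.0, 1.2, 0.9, 1.1])
theta_priv = sample_mechanism(data)
# The sample concentrates near the regularized minimizer (around 1.0
# here); larger k means better utility but weaker privacy.
```

The privacy-utility tradeoff is governed by `k`: the paper shows the Gaussian differential privacy of this family of densities, while the regularizer `r` (strongly convex in the relevant norm) plays the role that the squared Euclidean norm plays in [Gopi, Lee, Liu '22].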