Paper Title
Privacy Preserving Release of Mobile Sensor Data
Paper Authors
Paper Abstract
Sensors embedded in mobile smart devices can monitor users' activity with high accuracy to provide a variety of services to end users, ranging from precise geolocation and health monitoring to handwritten word recognition. However, this involves the risk of accessing, and potentially disclosing, individuals' sensitive information to apps, which may lead to privacy breaches. In this paper, we aim to minimize privacy leakages that may lead to user identification on mobile devices through user tracking and distinguishability, while preserving the functionality of apps and services. We propose a privacy-preserving mechanism that effectively handles sensor data fluctuations (e.g., inconsistent sensor readings while walking, sitting, and running at different times) by formulating the problem as time-series modeling and forecasting. The proposed mechanism also uses correlated noise series to withstand noise-filtering attacks, in which an adversary attempts to filter out the noise from the perturbed data in order to re-identify the original data. Unlike existing solutions, our mechanism runs in isolation, without interaction with the user or the service provider. We perform rigorous experiments on benchmark datasets and show that, compared to releasing the original data, our mechanism limits user tracking and distinguishability threats to a significant extent while maintaining a reasonable level of utility. In general, our obfuscation mechanism reduces the user trackability threat by 60% across all datasets while keeping the utility loss below 0.5 Mean Absolute Error (MAE). We also observe that our mechanism is more effective on larger datasets; for example, on the Swipes dataset, the distinguishability risk is reduced by 60% on average while the utility loss remains below 0.5 MAE.
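The abstract's central technical idea, releasing a sensor time series perturbed with temporally correlated noise so that simple filtering cannot strip the noise out, and measuring utility loss as MAE, can be illustrated with a minimal sketch. The sketch below is not the paper's actual mechanism (which builds on time-series modeling and forecasting); it uses an AR(1) process merely as one example of a correlated noise series, and the function names (`correlated_noise`, `perturb`, `mae`), parameter values, and the synthetic signal are illustrative assumptions.

```python
import numpy as np

def correlated_noise(n, sigma=0.3, rho=0.9, rng=None):
    """Generate an AR(1) noise series: temporally correlated samples,
    used here as a stand-in for the paper's correlated noise-series idea."""
    rng = np.random.default_rng() if rng is None else rng
    noise = np.empty(n)
    noise[0] = rng.normal(0.0, sigma)
    for t in range(1, n):
        # Each sample depends on the previous one, so the noise varies
        # smoothly like a genuine motion signal rather than as white noise.
        noise[t] = rho * noise[t - 1] + rng.normal(0.0, sigma * np.sqrt(1 - rho**2))
    return noise

def perturb(readings, sigma=0.3, rho=0.9, rng=None):
    """Obfuscate a 1-D sensor time series by adding correlated noise."""
    readings = np.asarray(readings, dtype=float)
    return readings + correlated_noise(len(readings), sigma, rho, rng)

def mae(original, released):
    """Utility loss as Mean Absolute Error between original and released data."""
    return float(np.mean(np.abs(np.asarray(original) - np.asarray(released))))

# Hypothetical accelerometer-like signal (periodic, walking-like motion).
t = np.linspace(0.0, 10.0, 500)
signal = np.sin(2 * np.pi * 0.5 * t)
released = perturb(signal, sigma=0.2, rho=0.95)
print("utility loss (MAE):", mae(signal, released))
```

A correlated series with rho close to 1 changes slowly, so its spectrum overlaps with that of real motion data; this is what makes noise-filtering attacks less effective against it than against independent (white) noise, at the cost of the MAE utility loss reported above.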