Paper Title
Machine Learning Explanations to Prevent Overtrust in Fake News Detection
Paper Authors
Paper Abstract
Combating fake news and misinformation propagation is a challenging task in the post-truth era. News feed and search algorithms can lead to the unintentional large-scale propagation of false and fabricated information, with users exposed to algorithmically selected false content. Our research investigates the effects of an Explainable AI assistant embedded in news review platforms on combating the propagation of fake news. We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms to study the effects of algorithmic transparency on end users. We present evaluation results and analysis from multiple controlled crowdsourced studies. For a deeper understanding of Explainable AI systems, we discuss the interactions between user engagement, mental models, trust, and performance measures in the process of explaining. The study results indicate that explanations helped participants build appropriate mental models of the intelligent assistants under different conditions and adjust their trust accordingly to account for model limitations.
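To make the idea of an "interpretable fake news detection algorithm" concrete, here is a minimal sketch of one common approach: a linear classifier over TF-IDF features whose per-word weights can be surfaced to the user as word-level evidence. This is an illustrative assumption, not the paper's actual four models, dataset, or interface; all data, function names, and parameters below are hypothetical.

```python
# Hypothetical sketch: an interpretable fake news classifier whose explanation
# is the set of words that pushed the prediction most. Not the paper's models.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy labeled headlines (placeholder data; the study uses its own news dataset).
headlines = [
    "Scientists confirm new exoplanet in habitable zone",
    "Shocking miracle cure doctors don't want you to know",
    "City council approves budget for public transit upgrade",
    "Celebrity secretly replaced by clone, insider claims",
]
labels = np.array([0, 1, 0, 1])  # 0 = real, 1 = fake

vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
X = vectorizer.fit_transform(headlines)

clf = LogisticRegression()
clf.fit(X, labels)

def explain(text, top_k=5):
    """Return P(fake) plus the words whose learned weights contributed most."""
    vec = vectorizer.transform([text])
    prob_fake = clf.predict_proba(vec)[0, 1]
    # Contribution of each present word = its tf-idf value * learned weight.
    contrib = vec.toarray()[0] * clf.coef_[0]
    terms = np.array(vectorizer.get_feature_names_out())
    order = np.argsort(-np.abs(contrib))[:top_k]
    evidence = [(terms[i], round(float(contrib[i]), 3)) for i in order if contrib[i] != 0]
    return prob_fake, evidence

prob_fake, evidence = explain("Miracle cure shocks doctors worldwide")
print(f"P(fake) = {prob_fake:.2f}; word-level evidence: {evidence}")
```

A design like this is transparent by construction: the same weights used for the prediction are shown as the explanation, which is the kind of word-level evidence a news review interface could display next to the assistant's verdict.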