Paper Title

Counterfactually Evaluating Explanations in Recommender Systems

Authors

Yuanshun Yao, Chong Wang, Hang Li

Abstract

Modern recommender systems face an increasing need to explain their recommendations. Despite considerable progress in this area, evaluating the quality of explanations remains a significant challenge for researchers and practitioners. Prior work mainly conducts human study to evaluate explanation quality, which is usually expensive, time-consuming, and prone to human bias. In this paper, we propose an offline evaluation method that can be computed without human involvement. To evaluate an explanation, our method quantifies its counterfactual impact on the recommendation. To validate the effectiveness of our method, we carry out an online user study. We show that, compared to conventional methods, our method can produce evaluation scores more correlated with the real human judgments, and therefore can serve as a better proxy for human evaluation. In addition, we show that explanations with high evaluation scores are considered better by humans. Our findings highlight the promising direction of using the counterfactual approach as one possible way to evaluate recommendation explanations.
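To make the core idea concrete, here is a minimal sketch of what "quantifying an explanation's counterfactual impact on the recommendation" could look like. This is an illustrative assumption, not the paper's actual formulation: the toy `recommend_score` model, the tag-overlap scoring, and the `counterfactual_score` helper are all hypothetical stand-ins. The intuition is that a good explanation cites the aspects of the user's history that actually drive the recommendation, so removing those aspects should cause a large score drop.

```python
def recommend_score(history, item_tags):
    # Toy recommender: score is the fraction of the item's tags that
    # appear in the user's history. A stand-in for a real model's scorer.
    return len(set(history) & set(item_tags)) / max(len(item_tags), 1)

def counterfactual_score(history, item_tags, explanation_tags):
    """Drop in recommendation score after removing the aspects the
    explanation cites; a larger drop means the explanation points at
    aspects that actually mattered to the recommendation."""
    factual = recommend_score(history, item_tags)
    reduced_history = [t for t in history if t not in explanation_tags]
    counterfactual = recommend_score(reduced_history, item_tags)
    return factual - counterfactual

history = ["jazz", "piano", "live"]
item_tags = ["jazz", "piano"]

# An explanation citing the true drivers of the recommendation scores high;
# one citing an irrelevant aspect scores low.
good = counterfactual_score(history, item_tags, {"jazz", "piano"})  # → 1.0
weak = counterfactual_score(history, item_tags, {"live"})           # → 0.0
```

Under this (assumed) formulation, the evaluation is fully offline: it needs only the recommendation model and the candidate explanation, with no human raters in the loop, which is the property the abstract highlights.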
