Paper Title
Play MNIST For Me! User Studies on the Effects of Post-Hoc, Example-Based Explanations & Error Rates on Debugging a Deep Learning, Black-Box Classifier
Paper Authors
Paper Abstract
This paper reports two experiments (N=349) on the impact of post-hoc, example-based explanations and error rates on people's perceptions of a black-box classifier. Both experiments show that when people are given case-based explanations from an implemented ANN-CBR twin system, they perceive misclassifications to be more correct. They also show that as error rates increase above 4%, people trust the classifier less and view it as being less correct, less reasonable, and less trustworthy. The implications of these results for XAI are discussed.