Paper Title
What and How of Machine Learning Transparency: Building Bespoke Explainability Tools with Interoperable Algorithmic Components
Paper Authors
Paper Abstract
Explainability techniques for data-driven predictive models based on artificial intelligence and machine learning algorithms allow us to better understand the operation of such systems and help to hold them accountable. New transparency approaches are developed at breakneck speed, enabling us to peek inside these black boxes and interpret their decisions. Many of these techniques are introduced as monolithic tools, giving the impression of one-size-fits-all and end-to-end algorithms with limited customisability. Nevertheless, such approaches are often composed of multiple interchangeable modules that need to be tuned to the problem at hand to produce meaningful explanations. This paper introduces a collection of hands-on training materials -- slides, video recordings and Jupyter Notebooks -- that provide guidance through the process of building and evaluating bespoke modular surrogate explainers for tabular data. These resources cover the three core building blocks of this technique: interpretable representation composition, data sampling and explanation generation.
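To make the three building blocks concrete, the sketch below outlines a minimal modular surrogate explainer for tabular data in the spirit described by the abstract. It is an illustrative approximation rather than the code shipped with the paper's training materials: a quartile-based interpretable representation, Gaussian sampling around the explained instance, and a distance-weighted ridge surrogate are just one possible choice for each module, and all names (`to_interpretable`, `black_box`, `surrogate`, and so on) are hypothetical.

```python
# Minimal sketch (not the authors' code) of the three building blocks of a
# modular surrogate explainer for tabular data, assuming a scikit-learn-style
# black box with a predict_proba method. Module choices are illustrative.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Black box to be explained and the instance of interest.
X, y = load_iris(return_X_y=True)
black_box = RandomForestClassifier(random_state=42).fit(X, y)
instance = X[0]

# 1. Interpretable representation composition: quartile-based discretisation
#    encodes each feature as a binary "falls in the same bin as the instance" flag.
quartiles = np.percentile(X, [25, 50, 75], axis=0)  # shape: (3, n_features)
instance_bins = np.array([np.digitize(instance[i], quartiles[:, i])
                          for i in range(X.shape[1])])

def to_interpretable(data):
    bins = np.stack([np.digitize(data[:, i], quartiles[:, i])
                     for i in range(data.shape[1])], axis=1)
    return (bins == instance_bins).astype(float)

# 2. Data sampling: draw perturbed neighbours from a Gaussian centred on the
#    explained instance (one of many possible sampling strategies).
rng = np.random.default_rng(42)
samples = rng.normal(loc=instance, scale=X.std(axis=0), size=(1000, X.shape[1]))

# 3. Explanation generation: fit a linear surrogate on the interpretable
#    representation, weighting samples by their proximity to the instance.
target_class = black_box.predict(instance.reshape(1, -1))[0]
predictions = black_box.predict_proba(samples)[:, target_class]
distances = np.linalg.norm(samples - instance, axis=1)
weights = np.exp(-(distances ** 2) / (2 * distances.std() ** 2))
surrogate = Ridge(alpha=1.0).fit(to_interpretable(samples), predictions,
                                 sample_weight=weights)

# The surrogate's coefficients approximate the black box's behaviour locally.
for i, coef in enumerate(surrogate.coef_):
    print(f"feature {i}: {coef:+.3f}")
```

Because the modules are interchangeable, each step can be swapped independently, for example replacing the Gaussian sampler with a different neighbourhood-generation scheme or the ridge surrogate with a shallow decision tree, which is the kind of per-problem customisation the training materials guide the reader through.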