Paper Title

Can Model Compression Improve NLP Fairness

Paper Authors

Guangxuan Xu, Qingyuan Hu

Paper Abstract

Model compression techniques are receiving increasing attention; however, the effect of compression on model fairness is still underexplored. This is the first paper to examine the effect of distillation and pruning on the toxicity and bias of generative language models. We test Knowledge Distillation and Pruning methods on the GPT2 model and find a consistent pattern of toxicity and bias reduction after model distillation. This result can potentially be explained by an existing line of research that describes model compression as a regularization technique. Our work not only serves as a reference for the safe deployment of compressed models, but also extends the discussion of "compression as regularization" into the setting of neural LMs, and hints at the possibility of using compression to develop fairer models.
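
The abstract names two standard compression techniques applied to GPT2: knowledge distillation and pruning. As a rough illustration only, not the authors' experimental setup, here is a minimal sketch of both using PyTorch and Hugging Face transformers. The student model choice (distilgpt2), the loss weighting alpha, the temperature, and the 30% sparsity level are all illustrative assumptions.

```python
# Minimal sketch of the two compression techniques named in the abstract,
# applied to GPT2 via Hugging Face transformers. Hyperparameters (alpha,
# temperature, sparsity) are illustrative assumptions, not the paper's values.
import torch
import torch.nn.functional as F
import torch.nn.utils.prune as prune
from transformers import GPT2LMHeadModel
from transformers.pytorch_utils import Conv1D

teacher = GPT2LMHeadModel.from_pretrained("gpt2").eval()
student = GPT2LMHeadModel.from_pretrained("distilgpt2")  # smaller student

def distillation_loss(input_ids, alpha=0.5, temperature=2.0):
    """Soft-label KD: KL divergence between temperature-softened teacher
    and student distributions, mixed with the student's ordinary LM loss."""
    with torch.no_grad():
        t_logits = teacher(input_ids).logits
    s_out = student(input_ids, labels=input_ids)
    kd = F.kl_div(
        F.log_softmax(s_out.logits / temperature, dim=-1),
        F.softmax(t_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2
    return alpha * kd + (1 - alpha) * s_out.loss

# Magnitude pruning: zero out the 30% smallest-magnitude weights in every
# linear projection (GPT2 blocks use Conv1D as their linear layers).
pruned = GPT2LMHeadModel.from_pretrained("gpt2")
for module in pruned.modules():
    if isinstance(module, (torch.nn.Linear, Conv1D)):
        prune.l1_unstructured(module, name="weight", amount=0.3)

# Smoke test with random token IDs, just to check shapes run end to end.
ids = torch.randint(0, 50257, (1, 16))
print(distillation_loss(ids))
```

In the paper's framing, one would then generate text from the compressed and uncompressed models and compare their toxicity and bias scores; the abstract reports that distillation consistently reduced both.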
