International Joint Conference on Artificial Intelligence

mdfa: Multi-Differential Fairness Auditor for Black Box Classifiers



Abstract

Machine learning algorithms are increasingly involved in sensitive decision-making processes with adverse implications for individuals. This paper presents mdfa, an approach that identifies the characteristics of the victims of a classifier's discrimination. We measure discrimination as a violation of multi-differential fairness. Multi-differential fairness is a guarantee that a black box classifier's outcomes do not leak information on the sensitive attributes of a small group of individuals. We reduce the problem of identifying worst-case violations to matching distributions and predicting where sensitive attributes and classifier's outcomes coincide. We apply mdfa to a recidivism risk assessment classifier and demonstrate that for individuals with little criminal history, identified African-Americans are three times more likely to be considered at high risk of violent recidivism than similar non-African-Americans.
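The core idea of auditing for a multi-differential fairness violation can be illustrated with a toy sketch: restrict attention to a subgroup of individuals and check whether the black-box classifier's positive rate differs sharply across sensitive groups within that subgroup. The sketch below is not the authors' mdfa implementation; the classifier, the score feature, and the subgroup definition are all synthetic assumptions made for illustration.

```python
import random

random.seed(0)

# Hypothetical black-box classifier: flags "high risk" from a score,
# but (unfairly) also uses the sensitive attribute. This is a synthetic
# stand-in, not a real risk-assessment tool.
def black_box(score, sensitive):
    return 1 if score + 0.3 * sensitive > 0.5 else 0

# Synthetic audit set: (score, sensitive attribute) pairs.
audit_set = [(random.random(), random.randint(0, 1)) for _ in range(10000)]

def positive_rate(subgroup, s):
    """Fraction of the subgroup with sensitive attribute s that is flagged."""
    outcomes = [black_box(x, a) for x, a in subgroup if a == s]
    return sum(outcomes) / len(outcomes)

# Restrict to a subgroup (here: low scores, loosely analogous to
# "little criminal history") and compare outcome rates within it.
subgroup = [(x, a) for x, a in audit_set if x < 0.6]
r1 = positive_rate(subgroup, 1)
r0 = positive_rate(subgroup, 0)
print(f"positive rate (sensitive=1): {r1:.2f}")
print(f"positive rate (sensitive=0): {r0:.2f}")

# A large ratio r1/r0 within the subgroup signals a differential
# fairness violation: outcomes leak the sensitive attribute there.
print(f"disparity ratio: {r1 / r0:.1f}")
```

mdfa's contribution, per the abstract, is to search for the *worst-case* such subgroup automatically (via distribution matching and prediction) rather than checking a hand-picked one as this sketch does.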
