3 HUMANITARIANS

Abstract

Himabindu Lakkaraju designed an artificial-intelligence program that serves as a bias check for decision makers such as judges and doctors. Machine learning and AI are increasingly used in law enforcement to decide which defendants get bail, in health care to determine medical treatments, and at financial institutions to decide who gets loans. Automated decision making has pitfalls: software can miss nuances that a human might catch when looking at a criminal, medical, or credit record. But humans can also miss nuances, and they have their own biases, especially when they are pressed for time and must make life-altering decisions. Lakkaraju's system relies not solely on human choices or on machine learning but on a combination of the two. Most of her work deals with data sets in which she could see the expected outcomes of both AI and human decisions, and spot where bias might occur.
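
The abstract describes this human-plus-machine check only at a high level. As a rough illustration (a hypothetical sketch, not Lakkaraju's actual system), the Python snippet below trains a simple model on recorded outcomes and flags the cases where the model's prediction disagrees with the human's recorded decision; such disagreements are the natural places to look for possible bias. All data and names here are invented for illustration.

    # Hypothetical sketch: flag human/model disagreements for bias review.
    # This does not reproduce Lakkaraju's system; it only illustrates the
    # idea of combining human decisions with a learned model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Invented data: case features, observed outcomes, and the decisions
    # humans actually made on each case.
    X = rng.normal(size=(200, 4))                 # case features
    outcomes = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)
    human_decisions = (X[:, 0] + 0.8 * X[:, 1] > 0).astype(int)  # humans lean on feature 1

    # Learn expected outcomes from the data, then compare the model's
    # predictions with what the humans decided.
    model = LogisticRegression().fit(X, outcomes)
    model_decisions = model.predict(X)

    # Cases where model and human disagree are flagged for review,
    # not decided automatically: the tool is a bias check, not a judge.
    flagged = np.flatnonzero(model_decisions != human_decisions)
    print(f"{len(flagged)} of {len(X)} cases flagged for review")

In any real deployment the model would be validated on held-out cases and the flagged decisions audited by people rather than overridden automatically.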

Bibliographic Details

  • Source
    Technology Review | 2019, No. 4 | pp. 101-102 | 2 pages
  • Author

  • Author Affiliation
  • Indexing Information
  • Original Format: PDF
  • Language: eng
  • CLC Classification
  • Keywords
