
Examining the effect of explanation on satisfaction and trust in AI diagnostic systems

Abstract

Artificial Intelligence has the potential to revolutionize healthcare, and it is increasingly being deployed to support and assist medical diagnosis. One potential application of AI is as the first point of contact for patients, providing initial diagnoses before a patient is referred to a specialist and allowing health care professionals to focus on more challenging and critical aspects of treatment. But for AI systems to succeed in this role, it will not be enough for them merely to provide accurate diagnoses and predictions; they will also need to explain (to both physicians and patients) why those diagnoses were made. Without such explanations, accurate and correct diagnoses and treatments might be ignored or rejected. It is therefore important to evaluate the effectiveness of these explanations and to understand the relative effectiveness of different kinds of explanation. In this paper, we examine this problem across two simulation experiments. In the first experiment, we tested a re-diagnosis scenario to understand the effect of local and global explanations. In the second simulation experiment, we implemented different forms of explanation in a similar diagnosis scenario. Results show that explanation improved satisfaction measures during the critical re-diagnosis period but had little effect before re-diagnosis (when initial treatment was taking place) or after (when an alternate diagnosis resolved the case successfully). Furthermore, initial “global” explanations about the process had no impact on immediate satisfaction but improved later judgments of understanding of the AI. Results of the second experiment show that visual and example-based explanations integrated with rationales had a significantly better impact on patient satisfaction and trust than no explanation or text-based rationales alone. As in Experiment 1, these explanations had their effect primarily on immediate measures of satisfaction during the re-diagnosis crisis, with little advantage before re-diagnosis or once the diagnosis was successfully resolved. These two studies support several conclusions about how patient-facing explanatory diagnostic systems may succeed or fail. Based on these studies and a review of the literature, we provide design recommendations for the explanations offered by AI systems in the healthcare domain.