Decision Support Systems

Transparency and accountability in AI decision support: Explaining and visualizing convolutional neural networks for text information



Abstract

Proliferating applications of deep learning, along with the prevalence of large-scale text datasets, have revolutionized the natural language processing (NLP) field and driven its recent explosive growth. Nevertheless, it is argued that state-of-the-art studies focus excessively on producing quantitative performance superior to existing models by playing "the Kaggle game." Hence, the field needs more effort devoted to solving new problems and proposing novel approaches and architectures. We claim that one promising and constructive effort is to design transparent and accountable artificial intelligence (AI) systems for text analytics; doing so enhances the applicability and problem-solving capacity of such systems for real-world decision support. It is widely accepted that deep learning models outperform existing algorithms, yet they are often criticized for being less interpretable, i.e., for being a "black box." Users therefore tend to hesitate to rely on them for decision-making, especially in critical tasks. Such opacity obstructs the transparency and accountability of the overall system and can hinder the deployment of AI-powered decision support systems. Furthermore, recent regulations increasingly emphasize fairness and transparency in algorithms, making explanations compulsory rather than voluntary. Thus, to enhance the transparency and accountability of the decision support system while preserving the capacity to model complex text data, we propose the Explaining and Visualizing Convolutional neural networks for Text information (EVCT) framework. By adopting and improving cutting-edge methods from NLP and image processing, the EVCT framework provides a human-interpretable solution to the text classification problem while minimizing information loss. Experimental results on large-scale, real-world datasets show that EVCT performs comparably to benchmark models, including widely used deep learning models. In addition, we present human-interpretable, relevant visualized explanations obtained by applying EVCT to the datasets, as well as possible applications for real-world decision support.
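The abstract does not detail EVCT's internals, but the general technique it names, a convolutional neural network over text whose predictions are explained through visualization, can be sketched generically. Below is a minimal, hypothetical PyTorch illustration: a Kim-style 1-D CNN over token embeddings, with a gradient-times-input saliency score per token in the spirit of the image-processing explanation methods the abstract alludes to. The `TextCNN` and `token_saliency` names and all hyperparameters are assumptions for illustration, not the paper's actual architecture.

```python
# Hypothetical sketch only: EVCT's exact design is not given in the abstract.
# This shows the generic pattern of a convolutional text classifier plus a
# gradient-based, per-token explanation that can be rendered as a heat map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    """Minimal Kim-style CNN for text classification (illustrative only)."""
    def __init__(self, vocab_size, embed_dim=128, num_classes=2,
                 kernel_sizes=(3, 4, 5), num_filters=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes])
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):
        emb = self.embedding(token_ids)        # (batch, seq_len, embed_dim)
        emb.retain_grad()                      # keep gradients for saliency
        x = emb.transpose(1, 2)                # (batch, embed_dim, seq_len)
        feats = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        logits = self.fc(torch.cat(feats, dim=1))
        return logits, emb

def token_saliency(model, token_ids, target_class):
    """Gradient-x-input relevance: how much each token drives the prediction."""
    model.zero_grad()
    logits, emb = model(token_ids)
    logits[0, target_class].backward()
    # L2 norm of (gradient * embedding) per token position
    return (emb.grad * emb).norm(dim=2).squeeze(0).detach()

# Toy usage: the scores can be overlaid on the input tokens as a heat map.
model = TextCNN(vocab_size=10_000)
ids = torch.randint(0, 10_000, (1, 20))       # one sequence of 20 token ids
scores = token_saliency(model, ids, target_class=1)
print(scores)                                  # one relevance score per token
```

Under these assumptions, the per-token scores play the role of the visualized explanations described in the abstract: they attribute the classifier's decision back to individual input tokens so a decision-maker can inspect why a text was classified a given way.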
