Journal: Information communication and society

Nonhuman humanitarianism: when 'AI for good' can be harmful


Abstract

Artificial intelligence (AI) applications have been introduced into humanitarian operations to help address the significant challenges the sector is facing. This article focuses on chatbots, which have been proposed as an efficient method to improve communication with, and accountability to, affected communities. Chatbots, together with other humanitarian AI applications such as biometrics, satellite imaging, predictive modelling and data visualisations, are often understood as part of the wider phenomenon of 'AI for social good'. The article develops a critique that draws on decolonial approaches to humanitarianism and on critical algorithm studies, focusing on the power asymmetries underpinning both humanitarianism and AI. It asks whether chatbots, as exemplars of 'AI for good', reproduce inequalities in the global context. Drawing on a mixed-methods study that includes interviews with seven groups of stakeholders, the analysis observes that humanitarian chatbots do not fulfil claims such as 'intelligence'. Yet AI applications still have powerful consequences. Apart from the risks associated with misinformation and data safeguarding, chatbots reduce communication to its barest instrumental forms, creating disconnects between affected communities and aid agencies. This disconnect is compounded by the extraction of value from data and by experimentation with untested technologies. By reflecting the values of their designers, and by asserting Eurocentric values in their programmed interactions, chatbots reproduce the coloniality of power. The article concludes that 'AI for good' is an 'enchantment of technology' that reworks the colonial legacies of humanitarianism whilst also occluding the power dynamics at play.
