The Journal of Artificial Intelligence Research

Confronting Abusive Language Online: A Survey from the Ethical and Human Rights Perspective

Abstract

The pervasiveness of abusive content on the internet can lead to severe psychological and physical harm. Significant effort in Natural Language Processing (NLP) research has been devoted to addressing this problem through abusive content detection and related sub-areas, such as the detection of hate speech, toxicity, cyberbullying, etc. Although current technologies achieve high classification performance in research studies, it has been observed that the real-life application of this technology can cause unintended harms, such as the silencing of under-represented groups. We review a large body of NLP research on automatic abuse detection with a new focus on ethical challenges, organized around eight established ethical principles: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. In many cases, these principles relate not only to situational ethical codes, which may be context-dependent, but are in fact connected to universal human rights, such as the right to privacy, freedom from discrimination, and freedom of expression. We highlight the need to examine the broad social impacts of this technology, and to bring ethical and human rights considerations to every stage of the application life-cycle, from task formulation and dataset design, to model training and evaluation, to application deployment. Guided by these principles, we identify several opportunities for rights-respecting, socio-technical solutions to detect and confront online abuse, including ‘nudging’, ‘quarantining’, value sensitive design, counter-narratives, style transfer, and AI-driven public education applications.
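For illustration only (not part of the surveyed paper), the abuse-detection systems the abstract refers to are typically fine-tuned text classifiers. The minimal Python sketch below assumes the Hugging Face transformers library and uses the publicly available unitary/toxic-bert model as an example choice; any comparable toxicity classifier would play the same role.

    # Minimal sketch of abusive-content detection framed as text classification.
    # Assumes the Hugging Face `transformers` library; the model name is an
    # illustrative choice, not one used or endorsed by the survey.
    from transformers import pipeline

    classifier = pipeline("text-classification", model="unitary/toxic-bert")

    comments = [
        "Thanks for sharing this, really helpful!",
        "You people are worthless and should disappear.",
    ]

    for comment in comments:
        result = classifier(comment)[0]  # e.g. {'label': 'toxic', 'score': 0.98}
        print(f"{result['label']:>10}  {result['score']:.2f}  {comment}")

High benchmark scores from a pipeline like this do not rule out the unintended harms the abstract highlights: such a classifier can, for instance, flag dialects and identity terms associated with under-represented groups at disproportionately high rates, contributing to the silencing effect the survey discusses.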