The Graphic: AI cyber-bullying detection

Abstract

Billions of messages are sent to social media networks every day, which makes the work of content curators tough. E&T wanted to know to what extent artificial intelligence (AI) can detect a toxic threat and where it falls short. We collaborated with researchers to put a state-of-the-art AI system to the test, asking it to decide which texts contained toxic content. The experiment used a new semantic text-analysis system developed by researchers at the University of Exeter Business School. E&T submitted more than 13,000 tweets carrying the hashtag 'cyberbullying'. David Lopez, lecturer in digital economy, created the tool, called LOLA, to detect misinformation, cyber-bullying and other harmful online content. It sorted each of our tweets into several categories, showing how 'toxic', 'severely toxic' or 'obscene' they were, and whether they contained 'an insult', 'a threat' or 'identity hate'. Each tweet received a score between 0 and 1, where zero means no association with the measure at all and one the strongest association.
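The scoring contract described above can be sketched in a few lines. LOLA's internals are not public, so the keyword heuristic below is a purely hypothetical stand-in for the real semantic model; only the interface matches the article, namely one score in the range 0 to 1 per category, with a simple threshold to flag a tweet.

```python
# Minimal sketch of per-category toxicity scoring as described in the text.
# The category names come from the article; the cue lists and scoring rule
# are illustrative assumptions, not LOLA's actual method.

CATEGORIES = ("toxic", "severe_toxic", "obscene", "insult", "threat", "identity_hate")

# Hypothetical keyword cues per category, for illustration only.
_CUES = {
    "insult": ("idiot", "stupid"),
    "threat": ("i will hurt", "watch your back"),
    "toxic": ("idiot", "stupid", "i will hurt"),
}

def score_text(text: str) -> dict:
    """Return a score in [0, 1] for every category (toy stand-in model)."""
    lowered = text.lower()
    scores = {}
    for category in CATEGORIES:
        cues = _CUES.get(category, ())
        hits = sum(cue in lowered for cue in cues)
        # Fraction of cues present, clamped to the 0-1 range the article describes.
        scores[category] = min(1.0, hits / max(len(cues), 1))
    return scores

def flag(scores: dict, threshold: float = 0.5) -> list:
    """Categories whose score meets the threshold, strongest first."""
    return sorted((c for c in scores if scores[c] >= threshold),
                  key=lambda c: -scores[c])

if __name__ == "__main__":
    tweet_scores = score_text("You are an idiot and stupid")
    print(flag(tweet_scores))
```

A real system would replace `score_text` with a trained classifier, but the downstream logic, comparing each 0-to-1 score against a moderation threshold, would look much the same.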
