BILLIONS OF messages are sent to social media networks every day, which makes the work of content curators tough. E&T wanted to know to what extent artificial intelligence (AI) can detect a toxic threat and where it falls short. We collaborated with researchers to put a state-of-the-art AI system to the test, asking it to decide which texts contained toxic content.

The experiment used a new semantic text system developed by researchers at the University of Exeter Business School. E&T submitted more than 13,000 tweets with the hashtag 'cyberbullying'. David Lopez, lecturer in digital economy, created the tool, called LOLA, to detect misinformation, cyberbullying and other harmful online content. The tool sorted each of our tweets into various categories, showing how 'toxic', 'severely toxic' or 'obscene' they were, and whether they contained 'an insult', 'a threat', or 'identity hate'. Each tweet received a score for each measure ranging between 0 and 1, with zero indicating no association with the measure and one the strongest association.
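The per-tweet output described above can be pictured as a small record of scores, one per category. The sketch below is a hypothetical illustration, not LOLA's actual code: the category names follow the article, but the example scores, the `flag_tweet` helper and the 0.5 flagging threshold are assumptions.

```python
# Categories named in the article; each gets a score between 0 and 1.
CATEGORIES = [
    "toxic", "severe_toxic", "obscene", "insult", "threat", "identity_hate",
]

def flag_tweet(scores: dict, threshold: float = 0.5) -> list:
    """Return the categories whose score meets the (hypothetical) threshold."""
    return [c for c in CATEGORIES if scores.get(c, 0.0) >= threshold]

# Illustrative scores for one tweet (invented for this example).
example = {
    "toxic": 0.91, "severe_toxic": 0.12, "obscene": 0.34,
    "insult": 0.78, "threat": 0.05, "identity_hate": 0.02,
}

print(flag_tweet(example))  # -> ['toxic', 'insult']
```

In this framing, a score near 1 strongly associates the tweet with a measure, so a curator could tune the threshold to trade off false alarms against missed toxic content.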