Given the huge impact that Online Social Networks (OSN) have had on the way people get informed and form their opinions, they have become an attractive playground for malicious entities that want to spread misinformation and leverage its effect. In fact, misinformation spreads easily on OSN and is a serious threat to modern society, possibly influencing the outcome of elections, or even putting people's lives at risk (e.g., through the spread of "anti-vaccine" misinformation). It is therefore of paramount importance for our society to have some sort of "validation" of the information spreading through OSN. Such wide-scale validation would greatly benefit from automatic tools.

In this paper, we show that it is difficult to carry out an automatic classification of misinformation considering only structural properties of content propagation cascades. We focus on structural properties because they would be inherently difficult to manipulate with the aim of circumventing classification systems. To support our claim, we carry out an extensive evaluation on Facebook posts belonging to conspiracy theories (as representative of misinformation) and scientific news (as representative of fact-checked content). Our findings show that conspiracy content actually reverberates in a way that is hard to distinguish from the way scientific content does: for the classification mechanisms we investigated, the classification F1-score never exceeds 0.65 during content propagation stages, and remains below 0.7 even after propagation is complete.