In their book Building an Inclusive Organization: Leveraging the power of a diverse workforce, Stephen Frost and Raafi-Karim Alidina cite examples of AI perpetrating some horrific displays of prejudice. In the US, for example, the investigative news organisation ProPublica analysed COMPAS, an algorithm used in Florida to predict the likelihood of criminal defendants reoffending. The study ultimately found that COMPAS was "twice as likely to misclassify black defendants as more likely to reoffend than their white counterparts".

Last year, Amazon discovered that its candidate-filtering systems didn't seem to like women. As Reuters reported, this was because "Amazon's computer models were trained to vet applicants by observing patterns in the resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry."

So why are the algorithms misfiring? Kate Glazebrook, founder and CEO of recruitment platform Applied, says it's important to remember who programs the code: "AI often backfires, baking in existing biases."
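Glazebrook's point about baked-in bias is easy to demonstrate. The sketch below is a toy illustration only, not Amazon's actual system: every feature, number and name in it is invented. It builds a hypothetical hiring dataset in which equally skilled women were historically hired less often, then shows that a standard classifier trained on those past decisions reproduces the bias as a strong negative weight on the gender proxy.

```python
# Toy illustration of bias "baked in" by training on skewed history.
# All features and figures are hypothetical, invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Invented applicant features: experience, a skills score, and a
# binary gender proxy. Skill is distributed identically across groups.
experience = rng.normal(5, 2, n)
skills = rng.normal(0, 1, n)
is_woman = rng.integers(0, 2, n)

# Historical hiring labels reflect biased human decisions: the same
# skills and experience led to fewer offers for women.
logits = 0.5 * experience + 1.0 * skills - 1.5 * is_woman
hired = (logits + rng.normal(0, 1, n)) > 2.5

X = np.column_stack([experience, skills, is_woman])
model = LogisticRegression().fit(X, hired)

# The model faithfully learns the historical pattern: a large negative
# coefficient on the gender proxy, despite identical underlying skill.
print(dict(zip(["experience", "skills", "is_woman"],
               model.coef_[0].round(2))))
```

Nothing in the training step is malicious; the classifier simply treats past discrimination as signal to be optimised for, which is exactly the failure mode the Reuters report describes.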