“History is written by the victors,” Winston Churchill famously said. (The irony, I’m sure, was unintentional; Churchill himself was one of those ‘victors’: a straight, white, cis man best known for leading Britain through WWII, with violent imperialist ambitions and unabashedly racist views.)
In artificial intelligence (AI), too, it seems, predictions — whether of company performance, advertising effectiveness, or, in the recent case of Amazon’s AI recruiting and hiring tool, who deserves a job — are determined by the “victors.”
Reuters reported last week that, like a lot of large enterprise tech companies, Amazon attempted to leverage AI and machine learning to help its HR teams process the multitude of resumes and job applications it received. In simple terms, Amazon fed its AI tool historical data on past successful hires and tasked it with looking for similar traits, skills, and experience in new resumes.
Can you guess what happened? The AI displayed a bias against women. This is not surprising, nor is this the first incident of its kind.
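The mechanism is simple enough to sketch. The toy scorer below (entirely made-up resumes and outcomes, not Amazon’s actual model) learns a weight for each resume word from past hiring decisions; because the historical “hires” skew male, a gendered term picks up a negative weight even when the skills listed are identical:

```python
from collections import Counter

# Hypothetical historical data: (resume text, 1 = hired, 0 = rejected).
past_hires = [
    ("chess club captain python", 1),
    ("python systems intern", 1),
    ("womens chess club python", 0),   # qualified, but historically rejected
    ("java backend hackathon", 1),
    ("womens coding society java", 0), # qualified, but historically rejected
]

def train(examples):
    """Weight each word: +1 per appearance in a hire, -1 per rejection."""
    weights = Counter()
    for text, hired in examples:
        for word in text.split():
            weights[word] += 1 if hired else -1
    return weights

def score(weights, resume):
    """Sum the learned weights of the words in a new resume."""
    return sum(weights[w] for w in resume.split())

w = train(past_hires)
# Identical skills; only the gendered word differs.
print(score(w, "chess club python"))         # → 1
print(score(w, "womens chess club python"))  # → -1
```

The scorer isn’t malicious; it has simply compressed the historical pattern it was given.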
I was talking yesterday with Chris Nicholson, CEO of AI/deep learning company Skymind and the co-creator of the Deeplearning4J framework, about this exact problem.
“AI is only as smart and as accurate as the historical data you’re feeding it. So, of course, you look at history — the literary canon, most of Western history — and if you’re an AI, you assume, ‘Oh, I guess only white dudes accomplished anything,’” he said. “You’re feeding it biased data, so of course it’s going to spit out biased results!”
Ankit Somani, co-founder of AI recruiting software company AllyO, said the same earlier this year.
“In tech, if you’re looking at candidates for technical roles, the majority are going to be white men. So if you just throw all those inputs into the system and go with whatever comes out, then you’ll see your system making the correlation between, say, a developer and then associating that with a white man,” Somani told me. “Based on the data it received, it’s not wrong. But if you aren’t understanding how the current lack of diversity is impacting these systems, then you can exacerbate the problem.”
Even when Amazon engineers edited the tool to mitigate the bias, there wasn’t a guarantee that the tool wouldn’t devise other discriminatory ways to sort candidates — so the tool was scrapped.
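Why scrapping it was the safer call is also easy to illustrate. In the same word-weighting spirit (again, invented data, not Amazon’s system), deleting the explicitly gendered term does not remove the bias, because words that merely co-occurred with it still carry the learned penalty: those proxies are the “other discriminatory ways” the engineers couldn’t rule out.

```python
from collections import Counter

# Hypothetical historical data: (resume text, 1 = hired, 0 = rejected).
past_hires = [
    ("chess club captain python", 1),
    ("womens coding society python", 0),
    ("java backend hackathon", 1),
    ("womens coding society java", 0),
]

# Learn word weights: +1 per appearance in a hire, -1 per rejection.
weights = Counter()
for text, hired in past_hires:
    for word in text.split():
        weights[word] += 1 if hired else -1

del weights["womens"]  # the "fix": drop the explicit gendered term

def score(resume):
    return sum(weights[w] for w in resume.split())

# The correlated proxy words still sink the score,
# even with no gendered term on the resume at all:
print(score("coding society python"))  # → -4
```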
There’s obviously a lot of potential for AI in the recruiting space, but the huge problem of bias has to be addressed first; otherwise, these tools will just exacerbate tech’s representation problem. Unfortunately, as Nihar Shah, who teaches machine learning at Carnegie Mellon University, said in the Reuters article, there is still much work to do.
“How to ensure that the algorithm is fair, how to make sure the algorithm is really interpretable and explainable — that’s still quite far off,” he said.