AI is now being used to automate recruitment more than ever before. AI tools can scan CVs, analyse video interviews, and compare them with successful past applications to spot personality traits that fit the role, allegedly eliminating human bias.
A recent survey by Ceridian showed that 42% of executives worldwide are already using AI in recruitment, and a further 46% plan to do so. That’s basically everyone.
However, AI recruitment tools are not hypercompetent robot HR managers who can understand abstract hiring criteria the way humans can. All they can do is find patterns: they make statistical links between features of past CVs and interviews and the personality traits recruiters want, then look for the same features in new applicants.
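To see why this matters, here is a deliberately crude sketch of that pattern-matching logic: a scorer that learns word frequencies from past successful applications and rates new CVs by how closely their vocabulary matches. This is an illustration, not any vendor's actual system; the function names and toy data are invented for the example.

```python
from collections import Counter

def train_scorer(past_cvs):
    # "Training" is just counting which words appear in past successful CVs.
    counts = Counter()
    for cv in past_cvs:
        counts.update(cv.lower().split())
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

def score(weights, cv):
    # A new CV scores higher the more it echoes the past hires' vocabulary.
    return sum(weights.get(word, 0.0) for word in cv.lower().split())

# Toy data: past hires happen to share one vocabulary and set of hobbies...
past = [
    "captained the football team, executed projects",
    "captured market share, executed strategy",
]
weights = train_scorer(past)

# ...so a CV echoing that vocabulary outscores an equally strong one
# that describes the same kind of experience in different words.
score(weights, "executed and captured key accounts")
score(weights, "led the netball squad, delivered results")
```

Nothing in the scorer is "about" gender or race, yet it reproduces whatever skew the past data contains, which is the bias problem in miniature.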
This may not be as helpful for recruiters as it seems. The link between behavioural cues and personality is still debated by psychiatrists. The software also raises ethical concerns about privacy and consent. What’s more, it may actually perpetuate bias; if the successful past applicants are mostly white men, and you tell an AI to find people with similar behavioural cues, there are no prizes for guessing what you’ll get.
In 2018, it emerged that Amazon had stopped using an AI CV-scanner for exactly that reason: trained on applications that were mostly from men, the AI had become sexist. It downgraded CVs that mentioned “women’s” activities and favoured words more commonly used by men, like “captured” and “executed”. Meanwhile, a German study found that AI video analysis could be picking people based on “personality traits” like their video background, their hairstyle, and whether they wore glasses.
It’s not yet clear whether AIs can be taught to overcome this kind of bias. Training them to disregard appearance and intonation would defeat the purpose. In fact, since these tools run on exactly the kind of superficial signals that drive bias, they might be more useful for surfacing biases to avoid than for picking candidates.
Privacy raises thorny problems of its own. Not all candidates want their personality analysed by a computer, but giving them the choice to opt out, or to select which results are shared with the employer, could reintroduce bias: candidates might share only flattering results, and interviewers could be less willing to hire those who withheld them.
Ethical AI recruitment demands highly controlled use by specialists who understand the tools. At this point, it’s worth asking whether the time and effort saved are worth the time and effort it will take to make AI recruitment truly ethical. AI analysis of hiring practices, rather than of the people being hired, might be a better route to eliminating bias.