Companies are embracing artificial intelligence (AI) to address discrimination in recruitment by removing human bias from parts of the process. One startup, Opus AI Inc., has developed a screening system that builds anonymous profiles of candidates from their resumes and their responses to a series of interview-style questions, then scores candidates on how well they match the traits employers say they want. Meanwhile, HireVue Inc. has built a video-screening platform whose AI models analyze interview responses, looking at signals such as word choice and facial cues, to help companies identify strong candidates.

There are concerns, however, about whether AI can actually overcome human prejudices. Amazon had to cancel its AI-powered recruiting system after it was revealed that the program, trained on historical data from a biased hiring environment, was downgrading resumes that included the word “women’s.” Others note that algorithms can carry functional biases of their own, such as struggling to read the facial expressions of non-Caucasian people. HireVue says it addresses these risks by drawing on a broad and diverse set of training data and testing for unintended bias before deploying an algorithm.

One academic who specializes in the ethics of machine learning notes that blocking potentially discriminatory variables is not necessarily enough, because an algorithm can still learn biased patterns from non-prohibited data that acts as a proxy for those variables. She also stresses the need for transparency in how proprietary software is tested and suggests that companies be held accountable to regulators or machine-learning experts.
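
The academic's point about proxy variables lends itself to a short illustration. The sketch below is hypothetical and not drawn from any of the systems in the article: it trains a simple classifier on synthetic, historically biased hiring data with the protected attribute removed, yet the model still selects the two groups at different rates because a correlated proxy feature remains in the inputs.

    # Hypothetical sketch (not any vendor's actual system) showing why dropping
    # a protected attribute does not remove bias: a correlated proxy feature
    # lets the model reconstruct it.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Protected attribute (group 0 vs. group 1) -- never shown to the model.
    group = rng.integers(0, 2, n)

    # A proxy feature correlated with group membership (e.g., a hobby keyword
    # or a historically gendered job title).
    proxy = group + rng.normal(0, 0.5, n)

    # A genuinely job-relevant skill score, independent of group.
    skill = rng.normal(0, 1, n)

    # Historical labels reflect a biased process: group 0 was favored.
    past_hired = (skill + 1.0 * (group == 0) + rng.normal(0, 0.5, n)) > 0.5

    # Train only on "neutral" features: skill and the proxy. Group is excluded.
    X = np.column_stack([skill, proxy])
    model = LogisticRegression().fit(X, past_hired)

    # Despite never seeing `group`, selection rates differ by group, because
    # the proxy carries the same information.
    pred = model.predict(X)
    for g in (0, 1):
        print(f"group {g}: selection rate = {pred[group == g].mean():.2f}")

Because the proxy carries nearly the same information as the blocked attribute, auditing outcomes by group, as the final loop does, catches bias that simply dropping the protected column cannot.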

Read the full article on news.medill.northwestern.edu.