By Alexia Elejalde-Ruiz
WWR Article Summary (tl;dr): As columnist Alexia Elejalde-Ruiz points out, advocates of AI-enhanced hiring claim it reduces turnover by bringing on candidates who are a better fit.
The last time Chuck Blatt searched for a job, about 10 years ago, he relied on a thoughtful cover letter, a resume printed on nice paper and good rapport during a face-to-face interview.
Now, he said, “that is all out the window.”
Since Blatt, 50, left his job as vice president of a painting and construction company in March, he’s spent nearly every day in front of the computer in his Chicago home applying for jobs via automated processes.
He uploads his job history with the click of a button. He records videos of himself answering automated interview questions. He takes the lengthy online personality tests employers use to screen candidates.
“I have been turned down for positions that I thought I would be perfect for,” Blatt said, and it is often impossible to know why. “There is no feedback because there is no one to talk to.”
Technology is transforming hiring, as employers inundated with applications turn to sophisticated tools to recruit and screen job candidates. Many companies save time with video interviews or resume filters that scan for keywords, and those at the leading edge are using artificial intelligence in a variety of ways: chatbots that schedule interviews and answer applicant questions; web crawlers that scour mountains of data to find candidates who aren’t actively job hunting; and algorithms that analyze existing employee data to predict an applicant’s future success.
Advocates of AI-enhanced hiring claim it reduces turnover by bringing on candidates who are a better fit. They also say a data-driven approach removes bias inherent in human decision-makers who, for example, might favor candidates who graduated from their alma mater.
But critics warn of the opposite effect: that some applicants could be unfairly weeded out.
Cathy O’Neil, a mathematician and author of the 2016 book “Weapons of Math Destruction,” worries that algorithms that predict whether an applicant will be a good fit, based on the kinds of employees who have succeeded before, could perpetuate implicit biases.
“If in the past you promoted tall white men or people who came from Harvard, that will come through in the algorithm,” O’Neil said. “Algorithms just look for patterns.”
The scoring is invisible, so even human resources departments don’t know why an applicant might have been rejected, making it difficult for anyone to challenge the process, she said.
There is also concern that algorithms and filters could quietly screen older people out, although that’s a concern with human recruiters as well. Blatt said that he has removed his college graduation date from his LinkedIn profile, plus all of his experience from the 1990s, so as not to advertise his age.
Blatt said he has landed a number of interviews, thanks to the volume of jobs he has applied to.
“A lot of it is a numbers game,” he said.
But Blatt, who is part of a networking group for executives run by JVS Chicago, a career counseling agency, said some of his older peers are so uncomfortable with automated systems that they refuse to go through with them.
Cutting costs, turnover
Much of the technology used in the hiring process shows great promise for helping employers cut costs associated with high turnover, said Natalie Pierce, co-chair of the Robotics, AI and Automation Industry Group at Littler Mendelson, a law firm that represents management.
One client, a department store that couldn’t retain cosmetics department employees, discovered through analytics that it had mistakenly assumed that hiring gregarious employees would lead to greater sales, when in fact the best salespeople were problem-solvers who invested time helping customers.
By changing the type of person it hired, the store was “greatly able to reduce training costs and attrition and increase the amount of commissions going to employees,” Pierce said.
But employers have to be careful. Algorithms designed to identify candidates similar to current high performers could screen out groups of people who are protected by anti-discrimination laws.
At a public meeting held by the Equal Employment Opportunity Commission to discuss the issue in 2016, a chief analyst at the federal agency described how an algorithm might find patterns of absences among employees with disabilities.
Even if the algorithm does not intentionally screen out people with disabilities, the impact could be discriminatory and therefore violate federal law, said Barry Hartstein, co-chair of Littler’s diversity practice.
“This is an area that the regulators are recognizing is the wave of the future,” he said. Littler’s growing AI team tests hiring algorithms to ensure they are having the intended outcomes.
The government has not filed any lawsuits based on an employer’s use of high-tech screening tools or algorithms, said Carol Miaskoff, associate legal counsel at the EEOC. But the agency is watching the trend, and employers need to be aware if the tech tools they use to hire and promote are prone to discrimination, she said.
Proving hiring discrimination is difficult because applicants rarely know for sure why they didn’t get the job, and deconstructing an algorithm presents an additional challenge, Miaskoff said. But an indicator could be the composition of the employee group used to train the algorithm, she said.
“It should be carefully constructed so that it is diverse by gender, race, age and disability,” she said.
The potential legal issues echo concerns about the growing popularity of personality tests, which have come under fire for potentially disadvantaging people with mental health issues.
Attorney Roland Behm, who filed charges with the EEOC against several national retailers that denied his son, Kyle, jobs after he completed their personality tests, said such tests are part of the same trend of using analytics to make hiring more efficient.
“More and more goes on behind the curtain,” Behm said. “From an employee perspective, you don’t know if what’s happening is appropriate or legal because you don’t know those things are happening.”
Kyle Behm, at the time an engineering student who had been diagnosed with bipolar disorder, learned in 2012 that he had failed a personality test at a grocery store only because a friend who worked there told him.
Roland Behm had his son apply to several other retailers with personality tests. In November, home improvement retailer Lowe’s, one target of the experiment, agreed to modify its online testing process to ensure it does not prevent people with mental health disabilities from finding jobs.
Some tech firms offering AI-enhanced recruiting services say they explicitly clean their data of bias.
Take Pymetrics, which counts Unilever and Tesla among its 60 large clients.
Pymetrics creates custom recruitment algorithms based on how top employees at each client company score on online games that measure 90 different traits, such as attention or altruism. Applicants play the online games and are evaluated based on how they score on the desired qualities.
The biggest reason companies use Pymetrics is to improve the fit and diversity of their teams, CEO Frida Polli said.
To guard against bias, Pymetrics tests each algorithm against a candidate pool of about 50,000 people for whom it has demographic data.
“We have to be really cautious when applying AI to hiring because untested and unchecked it can actually make diversity a lot worse,” Polli said.
Another example is HireVue, which launched 13 years ago to help companies conduct video interviews with talent around the world.
As employers struggled to sort through the mounting video submissions, HireVue introduced assessment algorithms three years ago; they are now used by nearly 100 of its 700 video-interview clients, said CEO Kevin Parker.