Organizations have relied on artificial intelligence to help them battle discrimination that creeps into their workplace processes, but even technology may not be immune to implicit biases.
While AI has been used to help employers screen resumes, find qualified candidates, expand DEI initiatives, provide inclusive benefits and more, the way technology is programmed can leave room for bias to creep in, according to experts at Tuesday’s HR Transform conference in Las Vegas.
“Artificial intelligence promises that it’s being used to remove the human bias by removing the human,” said Keith Sonderling, commissioner at the U.S. Equal Employment Opportunity Commission. “But that’s a function of what is fed into the AI in the first place.”
The machine itself can learn to discriminate, said Glen Cathey, head of digital strategy at Randstad. His firm has used AI chatbots to screen candidates and schedule interviews, hiring nearly 300,000 employees and interviewing more than one million people with these programs. Yet simply removing the human doesn’t mean the programs are fail-safe.
“If you’re a human, you have bias,” Cathey said. “When you use AI and use it at scale, you’ll have variability person-to-person, but if there’s systemic bias that’s built in, AI will fundamentally apply this bias across potentially millions of people.”
Sonderling explained that companies don’t intentionally try to screen out applicants, but often base their algorithms on the traits of their top-performing employees. In one case, a company that asked an AI tool to screen for the qualities of its best employees got back data saying those candidates were “named Jared and played high school lacrosse,” he said.
“Machine learning looks at patterns,” Sonderling said. “So even though there was no intent [to discriminate], the result was significant discrimination.”
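To make that pattern concrete, the following is a minimal, purely illustrative Python sketch (not any vendor’s actual screening system) of how a model trained on historical “top performer” labels can latch onto a proxy trait. The feature names and data are hypothetical, invented for demonstration only.

```python
# Illustrative only: a simple classifier trained on biased historical labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical features: years of experience, plus a proxy trait
# ("played_lacrosse") that correlated with past hiring decisions
# rather than with actual job performance.
experience = rng.normal(5, 2, n)
played_lacrosse = rng.integers(0, 2, n)

# Historical "top performer" labels encode the past bias: the proxy trait
# was heavily favored when these labels were created.
top_performer = (0.2 * experience + 1.5 * played_lacrosse
                 + rng.normal(0, 1, n)) > 1.5

X = np.column_stack([experience, played_lacrosse])
model = LogisticRegression().fit(X, top_performer)

# The learned weights show the proxy dominating the legitimate signal,
# and the model will now apply that pattern to every future applicant.
print(dict(zip(["experience", "played_lacrosse"], model.coef_[0])))
```

On this synthetic data, the weight on the proxy trait dwarfs the weight on experience, which is the scale effect Cathey describes: one biased pattern applied to every applicant the model ever scores.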
Common mistakes employers make when building their algorithms include not using complete data sets and neglecting to examine how data may intersect, said Rajamma Krishnamurthy, senior director of HR technology at Microsoft. For example, AI can screen for race, gender and ethnicity individually, but the data should also show how those attributes intersect.
“You can’t mitigate for bias if you can’t measure it in the first place,” Krishnamurthy said. “The more data, the better the decisions are.”
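As a concrete, deliberately simplified illustration of that measurement step, the sketch below computes selection rates for intersecting groups and compares each to the highest-rate group, similar in spirit to the EEOC’s four-fifths rule. The column names and records are hypothetical.

```python
# Illustrative only: measuring selection rates across intersecting groups.
import pandas as pd

applicants = pd.DataFrame({
    "race":     ["A", "A", "B", "B", "A", "B", "A", "B"],
    "gender":   ["F", "M", "F", "M", "M", "F", "F", "M"],
    "selected": [1,   1,   0,   1,   1,   0,   1,   0],
})

# Selection rate per race-and-gender combination, not per attribute alone.
rates = applicants.groupby(["race", "gender"])["selected"].mean()

# Ratio of each group's rate to the best-off group's rate; values well
# below 1.0 (conventionally below 0.8) warrant a closer look.
impact_ratio = rates / rates.max()
print(impact_ratio.round(2))
```

Measuring single attributes in isolation can hide exactly the gap Krishnamurthy warns about: a group can look fine on race alone and on gender alone, yet still be screened out at the intersection.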
She advised employers to share data and create an open-source environment where all data can be accessed, giving organizations more to build on. Language should also be as objective as possible when laying out the skills and qualifications employers are looking for. Finally, employers should consider whether they actually need AI, rather than simply reacting to what’s popular in the market.
“Developers should see if they need an AI solution, instead of just jumping into the need of the day,” Krishnamurthy said. “And then after the solution is out there, keep monitoring for changing circumstances.”
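Her last point, ongoing monitoring, can be sketched in the same hypothetical terms: recheck the group ratios on each new batch of decisions and flag any group that has slipped since the baseline audit. The column names and the tolerance value are illustrative assumptions, not a regulatory standard.

```python
# Illustrative only: monitoring deployed screening decisions for drift.
import pandas as pd

def impact_ratios(decisions: pd.DataFrame) -> pd.Series:
    """Selection rate of each race-and-gender group relative to the best-off group."""
    rates = decisions.groupby(["race", "gender"])["selected"].mean()
    return rates / rates.max()

def drift_report(baseline: pd.Series, current: pd.Series,
                 tolerance: float = 0.05) -> pd.Series:
    """Return groups whose relative selection rate has dropped by more than the tolerance."""
    drop = (baseline - current).reindex(baseline.index)
    return drop[drop > tolerance]
```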