New York City is trying to rein in the algorithms used to screen job applicants. It’s one of the first cities in the U.S. to try to regulate what is an increasingly common — and opaque — hiring practice.
The city council is considering a bill that would require potential employers to notify job candidates about the use of these tools, referred to as “automated decision systems.” Companies would also have to complete an annual audit to make sure the technology doesn’t result in bias.
The move comes as the use of artificial intelligence in hiring skyrockets, increasingly replacing human screeners. Fortune 500 companies including Delta, Dunkin’, Ikea and Unilever have turned to AI for help assessing job applicants. These tools run the gamut from simple text readers that screen applications for particular words and phrases to systems that evaluate videos of candidates to judge their suitability for the job.
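At the simple end of that spectrum, a screener amounts to little more than keyword matching. The sketch below is a hypothetical illustration of that approach; the phrases and threshold are invented, not any vendor's actual criteria:

```python
# Hypothetical sketch of a keyword-based resume screener.
# The target phrases and threshold are invented examples.
REQUIRED_PHRASES = {"python", "sql", "project management"}

def screen_resume(text: str, threshold: int = 2) -> bool:
    """Pass a resume along only if it mentions enough target phrases."""
    found = sum(1 for phrase in REQUIRED_PHRASES if phrase in text.lower())
    return found >= threshold

screen_resume("Led project management for SQL migrations")  # passes
screen_resume("Experienced barista and team lead")          # rejected
```

A candidate rejected this way never learns which missing phrase cost them the interview — which is the opacity the bill's notification requirement targets.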
“We have all the reasons to believe that every major company uses some algorithmic hiring,” Julia Stoyanovich, a founding director of the Center for Responsible AI at New York University, said in a recent webinar.
At a time when New Yorkers are suffering double-digit unemployment, legislators are concerned about the brave new world of digital hiring. Research has shown that AI systems can introduce more problems than they solve. Facial-recognition tools that use AI have had trouble identifying the faces of Black people and determining people’s sex, and have wrongly matched members of Congress to a mugshot database.
In perhaps the most notorious example of AI bias, a hiring tool developed internally at Amazon had to be scrapped because it discriminated against women. The tool was developed using a 10-year history of resumes submitted to the company, whose workforce skews male. As a result, the software effectively “taught” itself that male candidates were preferable and demoted applications that included the word “women,” or the names of two all-women’s colleges. While the tool was never used, it demonstrates the potential pitfalls of substituting machine intelligence for human judgment.
“As legislators in a city home to some of the world’s largest corporations, we must intervene and prevent unjust hiring,” city council member Laurie Cumbo, the bill’s sponsor, said at a hearing for the legislation last week.
Not tough enough
Several civil rights groups say New York’s proposed bill doesn’t go far enough. A dozen groups including the AI Now Institute, New York Civil Liberties Union and New York Communities for Change issued a letter last week pushing for the law to cover more types of automated tools and more steps in the hiring process. They want the measure to include heavier penalties, enabling people to sue if they’ve been passed over for a job because of biased algorithms. This would be in line with existing employment law, which allows applicants to sue for discrimination because of race or sex.
“If we pass [the bill] as it is worded today, it will be a rubber stamp for some of the worst forms of algorithmic discrimination,” Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, told the city council.
“We need much stronger penalties,” he said. “Just as we do with every other form of employment discrimination, we need that private-sector enforcement.”
Alicia Mercedes, a spokesperson for Cumbo, said the bill is still in its early stages and is likely to change in response to feedback.
“We’re committed to seeing this legislation come out as something that can be effective, so we will of course take any input that we can get from those who are working on these issues every day,” Mercedes said.
Less time, better results?
For hiring professionals, the main appeal of AI is its capacity to save time. But technologists have also touted the potential for automated programs, if used correctly, to eliminate human biases, such as the well-documented tendencies of hiring managers to overlook African-American applicants or to favor candidates who physically resemble themselves.
“When only a human reviews a resume, unfortunately, humans can’t un-see the things that cause unconscious biases — if someone went to the same alma mater or grew up in the same community,” said Athena Karp, CEO of HiredScore, an AI hiring platform.
Karp said she supports the New York bill. “If technologies are used in hiring, the makers of technology, and candidates, can and should know how they’re being used,” she said at the hearing.
In the U.S., the only place where this is currently the case is Illinois, whose Artificial Intelligence Video Interview Act requires employers to tell candidates when AI is being used to evaluate them and allows candidates to opt out. On the federal level, a bill to study bias in algorithms has been introduced in Congress. In New York, most job candidates have no clue they’re being screened by software — even those who are computer scientists themselves.
“I’ve received my fair share of job and internship rejections in my graduate and undergraduate careers,” said Lauren D’Arinzo, a master’s degree candidate in data science and AI at New York University. “It is unsettling to me that a future employer might disregard my application based on the output of an algorithm.”
She added, “What worries me most is, had I not been recruited into a project explicitly doing research in this space, I would likely not have even known that these types of tools are regularly used by Fortune 500 companies.”