Artificial Intelligence Hiring Bias Spurs Scrutiny and New Regs – Bloomberg Law

With New York City’s passage of one of the toughest U.S. laws regulating the use of artificial intelligence tools in the workplace, federal officials are signaling that they, too, want to scrutinize how the technology is used to sift through a growing pool of job applicants without running afoul of civil rights laws or baking in discrimination.

The use of the technology in hiring and other employment decisions is growing, but its prevalence remains hard to quantify, and the regulations aimed at combating bias in its application may be difficult to implement, academics and employment attorneys say.

“Basically, these are largely untested technologies with virtually no oversight,” said Lisa Kresge, research and policy associate at the University of California, Berkeley Labor Center, who studies the intersection of technological change and inequality. “That’s unprecedented in the workplace. We have rules about pesticides or safety on the shop floor. We have these digital technologies, and in virtual space, and that should be no different.”

The wide array of systems employers use is largely unregulated, she said. The Covid-19 pandemic also exacerbated a pattern of companies constantly churning through workers, clogging the hiring process and potentially prompting employers to lean more heavily on AI tools to sift through the volume of applicants, she added.

The use of artificial intelligence for recruitment, resume screening, automated video interviews, and other tasks has been on regulators’ and lawmakers’ radar for years, as workers began filing allegations of AI-related discrimination with the U.S. Equal Employment Opportunity Commission.

The EEOC recently signaled it would delve into artificial intelligence tools and how they contribute to bias, including in hiring and employee surveillance. The civil rights agency announced it will study how employers use AI and hear from stakeholders before providing guidance on “algorithmic fairness.”

The EEOC enforces federal civil rights laws, including Title VII of the 1964 Civil Rights Act, the Americans with Disabilities Act, and the Age Discrimination in Employment Act. Just like conventional employment practices, automated tools can run afoul of these laws by reinforcing bias or screening out candidates based on protected characteristics such as race, sex, national origin, or religion, officials have said.

“There are players in the AI space that aren’t savvy about compliance regimes that the more traditional methods have been living under for years or decades,” said Mark Girouard, who chairs the labor and employment practice at Nilan Johnson Lewis PA. “We are in a Wild West space when it comes to the use of these tools, and something needs to bring it into the same kind of compliance framework.”

New Laws Proposed

Employers in New York City will be banned from using automated employment decision tools to screen job candidates unless the technology has undergone a “bias audit” within the year before the tool is used. The law takes effect on Jan. 2, 2023. Companies also will be required to notify employees or candidates when the tool is used to make job decisions. Fines range from $500 to $1,500 per violation.
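The law doesn’t spell out an audit methodology, but one common disparate-impact check, drawn from longstanding EEOC practice, compares selection rates across demographic groups. The sketch below is purely illustrative, not the statute’s required procedure; the data, group labels, and the four-fifths (0.8) threshold are assumptions for the example.

```python
# Illustrative sketch of a disparate-impact check a "bias audit" might include:
# compare each group's selection rate to the most-selected group's rate.
# The data, group labels, and 0.8 (four-fifths) threshold are assumptions.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, selected) pairs; selected is True/False."""
    totals, picked = defaultdict(int), defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        if selected:
            picked[group] += 1
    return {g: picked[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Each group's selection rate divided by the highest group's rate."""
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

if __name__ == "__main__":
    outcomes = [("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
                ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False)]
    rates = selection_rates(outcomes)
    for group, ratio in impact_ratios(rates).items():
        flag = "review" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
        print(f"{group}: selection_rate={rates[group]:.2f} impact_ratio={ratio:.2f} ({flag})")
```

Whether a calculation like this would satisfy the city is exactly the kind of detail critics say the statute leaves unspecified.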

In the U.S. capital, District of Columbia Attorney General Karl Racine recently announced proposed legislation that would address “algorithmic discrimination” and require companies to submit to annual audits of their technology. These are among the boldest measures proposed by local governments.

“It’s the first trickle of what’s likely to become a flood,” Girouard said. “We had started to see some legislation around artificial intelligence, and this is the next step.”

There have been other recent efforts to build consent and transparency requirements around AI in employment.

Illinois in 2019 passed a measure requiring disclosure and applicant consent when artificial intelligence is used to evaluate video interviews. Several states and cities, including Maryland and San Francisco, previously passed measures prohibiting employers from using facial recognition technology without applicants’ consent.

As many as 83% of employers, and as many as 90% of Fortune 500 companies, are using some form of automated tool to screen or rank candidates for hiring, EEOC Chair Charlotte Burrows said at a recent conference. The tools can streamline hiring and support diversity efforts, but the civil rights agency will be vigilant, she warned.

“They could also be used to mask or even perpetuate existing discrimination and create new discriminatory barriers to jobs,” Burrows said.

Data Elusive

There’s also been litigation, including lawsuits filed over job advertisements posted on Facebook that target certain demographics, including age.

“The issue is how it can be used,” said Samuel Estreicher, a New York University law professor and director of its Center for Labor and Employment Law. “Some companies get thousands of resumes, and AI can be an intelligent way to screen them. Yet, there is a lot of literature that there is a serious bias problem. We just aren’t sure how these companies are using these tools.”

Berkeley’s Kresge said the tools use bots to screen for keywords and look through qualifications, scoring and ranking candidates. The tools essentially predict how successful a job candidate will be in the position by comparing how well that person matches “top performers,” she said.
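As a rough illustration of the keyword screening and ranking Kresge describes, consider the minimal sketch below; the keywords, weights, and resume text are invented for the example and don’t reflect any vendor’s actual system.

```python
# Minimal sketch of keyword-based resume scoring and ranking, illustrating
# the kind of screening described above. Keywords, weights, and resume
# text are invented for this example.
KEYWORD_WEIGHTS = {"python": 3, "sql": 2, "project management": 2, "excel": 1}

def score_resume(text):
    """Sum the weights of keywords that appear in the resume text."""
    text = text.lower()
    return sum(weight for keyword, weight in KEYWORD_WEIGHTS.items() if keyword in text)

def rank_candidates(resumes):
    """resumes: dict of candidate name -> resume text; returns highest score first."""
    scored = {name: score_resume(text) for name, text in resumes.items()}
    return sorted(scored.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    resumes = {
        "candidate_1": "Built Python and SQL pipelines; led project management.",
        "candidate_2": "Advanced Excel reporting and dashboarding.",
    }
    for name, score in rank_candidates(resumes):
        print(name, score)
```

Even a ranker this simple can embed bias if the chosen keywords, or the “top performer” profiles a real system is tuned against, correlate with protected characteristics, which is the concern researchers raise.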

Kresge said there is very little regulatory framework around these systems. Laws so far have targeted disclosure and transparency, which she said is important but only a starting point.

“We don’t know the scope of the problem. These systems basically have the potential for bias and discrimination against workers,” she said. “In the hiring space, that’s one of the biggest areas where these technologies are adopted.”

NYC Pushback

In New York, a coalition of civil rights groups led by the Surveillance Technology Oversight Project, or S.T.O.P., warned city officials that the new measure to tamp down on algorithmic bias will “rubber-stamp discrimination.”

They argued the weak protections would backfire and enable more biased AI software. The groups that signed a 2020 letter to the City Council’s Democratic majority leader, Laurie Cumbo, criticizing the measure’s ineffectiveness included the NAACP Legal Defense Fund, the National Employment Law Project, and the New York Civil Liberties Union.

“New York should be outlawing biased tech, not supporting it,” said S.T.O.P.’s executive director, Albert Fox Cahn. “Rather than blocking discrimination, this weak measure will encourage more companies to use biased and invasive AI tools.”

While aspects of the law are insufficient, it could be a step in the right direction because there is great need for oversight of the mechanisms used in the workplace, said Julia Stoyanovich, an N.Y.U. professor of computer science and engineering.

“My main issue with these tools is that we don’t know whether they work,” she said of the artificial intelligence technology.

New York is likely the first city to enforce a “bias audit,” she said, but certain traits, including disability and age, aren’t covered under the law. The audit only requires checking for bias by race, ethnicity, and sex, Stoyanovich said, adding that the details of how the audit is to be done aren’t spelled out, so it could be easy to meet the law’s requirements.

“The fear is that companies will use this as an endorsement, audit themselves then put up a smiley face, and it’s going to be counterproductive,” she said.

Source: https://news.bloomberglaw.com/daily-labor-report/artificial-intelligence-hiring-bias-spurs-scrutiny-and-new-regs