A Hiring Law Blazes a Path for A.I. Regulation

European legislators are finishing work on an AI law. The Biden administration and congressional leaders have their own plans for reining in artificial intelligence. Sam Altman, the chief executive of OpenAI, maker of the AI sensation ChatGPT, recommended the creation of a federal agency with licensing and oversight authority in Senate testimony last week. And the topic came up at the Group of 7 summit in Japan.

Amid sweeping plans and commitments, New York City has become a modest pioneer in AI regulation.

The city government passed a law in 2021 and adopted specific rules last month for one high-risk application of the technology: hiring and promotion decisions. Enforcement begins in July.

City law requires companies that use artificial intelligence software in hiring to notify candidates that an automated system is being used. It also requires companies to have independent auditors check the technology annually for bias. Candidates can request and receive information about what data is collected and analyzed. Businesses will be fined for violations.

New York City’s focused approach represents an important front in AI regulation. At some point, general principles developed by governments and international organizations, experts say, need to be translated into details and definitions. Who is being affected by technology? What are the benefits and harms? Who can intervene and how?

“Without a concrete use case, you’re not in a position to answer those questions,” said Julia Stoyanovich, an associate professor at New York University and director of its Center for Responsible AI.

But even before it takes effect, the New York City law has become a magnet for criticism. Public interest advocates say it doesn’t go far enough, while business groups say it’s impractical.

Complaints from both camps point to the challenge of regulating AI, which is advancing at breakneck pace with unknown consequences, provoking excitement and anxiety.

Uncomfortable compromises are inevitable.

Ms. Stoyanovich is concerned that the city law has loopholes that could weaken it. “But it’s much better than not having a law,” she said. “And until you try to regulate, you won’t learn how to do it.”

The law applies to businesses with workers in New York City, but labor experts hope it will influence practices nationwide. At least four states (California, New Jersey, New York, and Vermont) and the District of Columbia are also working on laws to regulate AI in hiring. And Illinois and Maryland have enacted laws limiting the use of specific AI technologies, often for workplace surveillance and job screening.

The New York City law arose from a clash of sharply opposed viewpoints. The City Council approved it during the final days of Mayor Bill de Blasio’s administration. Rounds of hearings and public comments, running to more than 100,000 words, followed, overseen by the city’s Department of Consumer and Worker Protection, the agency charged with writing the rules.

The result, say some critics, is too sympathetic to commercial interests.

“What could have been a landmark law has been watered down to no effect,” said Alexandra Givens, the president of the Center for Democracy and Technology, a policy and civil rights organization.

That’s because the law defines an “automated employment decision tool” as technology used “to assist or substantially replace discretionary decision making,” Ms. Givens said. The rules adopted by the city appear to construe that wording narrowly, so that AI software will require an audit only if it is the sole or primary factor in a hiring decision or is used to overrule a human, she said.

That sidesteps the primary way automated software is used, she said, since a hiring manager invariably makes the final decision. The potential for AI-powered discrimination, she said, typically arises in screening hundreds or thousands of candidates down to a handful, or in targeted online recruiting to generate a pool of candidates.

Ms. Givens also criticized the law for limiting the kinds of groups measured for unfair treatment. It covers bias by sex, race and ethnicity, but not discrimination against older workers or those with disabilities.

“My biggest concern is that this becomes the template at the national level when we should be asking our legislators for so much more,” Ms. Givens said.

City officials said the law was narrowed to sharpen it and to make sure it was focused and enforceable. The Council and the worker protection agency listened to many voices, including public interest activists and software companies. The goal was to weigh the trade-offs between innovation and potential harm, the officials said.

“This is a significant regulatory success in ensuring that AI technology is used ethically and responsibly,” said Robert Holden, who was the chair of the council’s technology committee when the law was passed and remains a member of the committee.

New York City is trying to address new technology in the context of federal workplace laws with hiring guidelines dating back to the 1970s. The main rule of the Equal Employment Opportunity Commission states that no selection practice or method used by employers should have a “disparate impact” on a legally protected group such as women or minorities.
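
The commission’s guidelines include a long-standing rule of thumb, the “four-fifths rule”: a selection rate for any protected group that is less than 80 percent of the highest group’s rate is generally regarded as evidence of disparate impact. A minimal sketch of that check in Python, using invented hiring counts:

```python
# Hedged sketch of the EEOC "four-fifths" disparate-impact check.
# The applicant and hire counts are hypothetical, for illustration only.

applicants = {"group_a": 200, "group_b": 150}  # candidates per group
hired = {"group_a": 50, "group_b": 15}         # hires per group

# Selection rate = hires / applicants for each group.
rates = {g: hired[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / highest  # ratio relative to the most-selected group
    status = "potential disparate impact" if ratio < 0.8 else "within guideline"
    print(f"{group}: rate {rate:.0%}, ratio {ratio:.2f} -> {status}")
```

With these made-up numbers, group_b’s 10 percent selection rate is 0.40 of group_a’s 25 percent, well below the four-fifths threshold.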

Companies have criticized the law. In a presentation this year, the Software Alliance, a trade group that includes Microsoft, SAP and Workday, said the requirement for independent AI audits was “not feasible” because “the audit landscape is nascent,” lacking standards and professional oversight bodies.

But a nascent field is a market opportunity. The AI audit business, experts say, is only going to grow. It’s already attracting law firms, consultants and start-ups.

Companies that sell artificial intelligence software to help with hiring and promotion decisions have generally embraced the regulation. Some have already undergone external audits. They see the requirement as a potential competitive advantage, demonstrating that their technology broadens the pool of job candidates for companies and increases opportunities for workers.

“We think we can comply with the law and show what good AI looks like,” said Roy Wang, general counsel at Eightfold AI, a Silicon Valley start-up that makes software used to help hiring managers.

The New York City law also takes an approach to regulating AI that may become the norm. The law’s key measure is an “impact ratio,” or an estimate of the effect of using the software on a protected group of job candidates. It does not delve into how an algorithm makes its decisions, a concept known as “explainability.”
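
The selection-rate version of that calculation mirrors the four-fifths check above. For tools that score candidates rather than select them outright, the city’s adopted rules describe a “scoring rate,” roughly the share of each group scored above the sample’s median; the impact ratio then compares each group’s scoring rate with the highest one. A minimal sketch under that reading, with invented scores:

```python
# Hedged sketch of an impact ratio for a *scoring* tool, assuming the
# "scoring rate" reading of the city's rules: the share of each group
# scored above the overall sample median. All scores are invented.
from statistics import median

scores = {
    "group_a": [82, 75, 91, 68, 88, 79],
    "group_b": [70, 64, 85, 72, 61, 77],
}

# Median across the full sample serves as the cutoff.
cutoff = median(s for group in scores.values() for s in group)

# Scoring rate per group: fraction of scores above the cutoff.
scoring_rates = {
    g: sum(s > cutoff for s in vals) / len(vals) for g, vals in scores.items()
}
highest = max(scoring_rates.values())

for g, rate in sorted(scoring_rates.items()):
    print(f"{g}: scoring rate {rate:.2f}, impact ratio {rate / highest:.2f}")
```

As with the selection-rate version, the metric looks only at outcomes; it says nothing about why the model scored candidates the way it did.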

In life-affecting applications like hiring, critics say, people have a right to an explanation of how a decision was made. But AI like ChatGPT-style software is becoming more complex, perhaps putting the goal of explainable AI out of reach, some experts say.

“The focus becomes the output of the algorithm, not how the algorithm works,” said Ashley Casovan, executive director of the Responsible AI Institute, which is developing certifications for the safe use of AI applications in the workplace, health care and finance.
