Seven AI companies agree on safeguards after pressure from the White House

Seven leading AI companies in the United States have agreed to voluntary safeguards on the development of the technology, the White House announced Friday, pledging to prioritize safety and trust even as they compete over the potential of artificial intelligence.

The seven companies—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI—will formally announce their commitment to the new standards in a meeting with President Biden at the White House on Friday afternoon.

The announcement comes as companies compete to outdo one another with versions of AI that offer powerful new tools to create text, photos, music and video without human intervention. But the technological advances have raised fears that the tools will facilitate the spread of misinformation, as well as dire warnings of an “extinction risk” as self-aware computers evolve.

On Wednesday, Facebook’s parent company Meta announced its own artificial intelligence tool called Llama 2 and said it would release the underlying code to the public. Nick Clegg, Meta’s president of global affairs, said in a statement that his company supports the safeguards developed by the White House.

“We are pleased to make these voluntary commitments along with others in the industry,” said Mr. Clegg. “They are an important first step in ensuring that responsible guardrails are set for AI and create a model for other governments to follow.”

The voluntary safeguards announced Friday are just a first step as Washington and governments around the world establish legal and regulatory frameworks for the development of artificial intelligence. White House officials said the administration was working on an executive order that would go beyond Friday’s announcement and support the development of bipartisan legislation.

“The companies that are developing these emerging technologies have a responsibility to ensure that their products are secure,” the administration said in a statement announcing the agreements. The statement said that companies must “uphold the highest standards to ensure that innovation does not come at the expense of the rights and safety of Americans.”

As part of the agreement, the companies agreed to:

  • Conduct security testing of their AI products, in part by independent experts, and share information about their products with governments and others trying to manage the technology’s risks.

  • Ensure that consumers can detect AI-generated material by implementing watermarks or other means to identify generated content.

  • Publicly report the capabilities and limitations of their systems on a regular basis, including security risks and evidence of bias.

  • Deploy advanced artificial intelligence tools to address society’s biggest challenges, like curing cancer and combating climate change.

  • Conduct research on the risks of bias, discrimination and invasion of privacy from the spread of AI tools.

“AI’s track record shows the insidiousness and prevalence of these dangers, and companies are committed to implementing AI that mitigates them,” the Biden administration statement said Friday before the meeting.

The agreement is unlikely to stop efforts to pass laws and impose regulations on emerging technology. Lawmakers in Washington are racing to catch up with rapid advances in artificial intelligence. And other governments are doing the same.

Last month, the European Union moved quickly to consider more far-reaching efforts to regulate the technology. Legislation proposed by the European Parliament would place strict limits on some uses of AI, including facial recognition, and would require companies to disclose more data about their products.
