Microsoft Calls for A.I. Rules to Minimize the Technology’s Risks

Microsoft endorsed a set of regulations for artificial intelligence on Thursday, as the company navigates concerns from governments around the world about the risks of the rapidly evolving technology.

Microsoft, which has promised to embed artificial intelligence into many of its products, proposed regulations including a requirement that systems used in critical infrastructure can be fully shut down or slowed, similar to an emergency braking system on a train. The company also called for laws clarifying when additional legal obligations apply to an AI system, and for labels making clear when a computer produced an image or video.

“Businesses need to step up,” Brad Smith, Microsoft’s president, said in an interview about the push for regulations. “The government needs to move faster.”

The call for regulations comes amid a boom in AI, with the release of the ChatGPT chatbot in November setting off a wave of interest. Companies including Microsoft and Google’s parent, Alphabet, have since raced to incorporate the technology into their products. That has fueled concerns that the companies are sacrificing safety to reach the next big thing before their competitors.

Lawmakers have publicly expressed concern that such AI products, which can generate text and images on their own, will create a flood of misinformation, be used by criminals and put people out of work. Regulators in Washington have vowed to keep an eye out for scammers using AI and cases where systems perpetuate discrimination or make decisions that break the law.

In response to that scrutiny, AI developers have increasingly called for some of the burden of policing the technology to be shifted to the government. Sam Altman, chief executive of OpenAI, which makes ChatGPT and counts Microsoft as an investor, told a Senate subcommittee this month that the government must regulate the technology.

The move echoes calls for new privacy or social media laws by internet companies like Google and Facebook parent Meta. In the United States, lawmakers have moved slowly on such calls, with few new federal rules on privacy or social media in recent years.

In the interview, Mr. Smith said Microsoft was not trying to shirk responsibility for managing the new technology, because it was offering specific ideas and committing to some of them regardless of whether the government took action.

“There is not an iota of abdication of responsibility,” he said.

He endorsed the idea, supported by Mr. Altman during his congressional testimony, that a government agency should require companies to obtain licenses to deploy “highly capable” AI models.

“That means you notify the government when you start testing,” Mr. Smith said. “You have to share the results with the government. Even when it’s licensed for deployment, you have a duty to keep monitoring it and to report to the government if unexpected problems arise.”

Microsoft, which made more than $22 billion from its cloud computing business in the first quarter, also said those high-risk systems should be allowed to operate only in “licensed AI data centers.” Mr. Smith acknowledged that the company would not be “ill positioned” to offer such services, but said that many US competitors could also provide them.

Microsoft added that governments should designate certain AI systems used in critical infrastructure as “high risk” and require them to have a “safety brake.” The company likened that feature to “the braking systems engineers have long built into other technologies, such as elevators, school buses, and high-speed trains.”

In some sensitive cases, Microsoft said, companies that provide artificial intelligence systems should have to know certain information about their customers. To protect consumers from deception, AI-created content should be required to carry a special label, the company said.

Mr. Smith said companies should bear legal “responsibility” for harms associated with AI. In some cases, he said, the liable party could be the developer of an application like Microsoft’s Bing search engine that uses someone else’s underlying AI technology. Cloud companies could be responsible for complying with security regulations and other rules, he added.

“We don’t necessarily have the best information or the best response, or we may not be the most credible speaker,” Mr. Smith said. “But, you know, right now, especially in Washington DC, people are looking for ideas.”
