Let’s calm down on AI ‘extinction risk’ and focus on AI regulation

For a hot minute last week, it seemed we were already on the verge of killer AI.

Several news outlets reported that a military drone had attacked its operator after deciding the human was getting in the way of its target. Except this turned out to be a simulation. And then it emerged that even the simulation hadn’t happened: an Air Force colonel had mistakenly described a thought experiment as real at a conference.

Still, a lie travels halfway around the world before the truth gets its boots on, and the story will surely seep into our collective, unconscious fears about the threat AI poses to the human race, an idea that has gained traction thanks to warnings from two “godfathers” of AI and two open letters on existential risk.

Deep-seated fears in our culture about runaway gods and machines are being unleashed, but everyone needs to calm down and take a closer look at what’s really going on here.

First, let’s acknowledge the cohort of computer scientists who have long believed that AI systems like ChatGPT need to be more carefully aligned with human values. They propose that if you design AI systems to follow principles like integrity and kindness, they will be less likely to turn around and try to kill us all in the future. I have no problem with these scientists.

But in recent months, the idea of an extinction threat has become such a fixture in public discourse that you could bring it up over dinner with your in-laws and have everyone nodding in agreement about the importance of the topic.

At first glance, this is ridiculous. It’s also great news for major AI companies, for two reasons:

1) It creates the specter of an all-powerful AI system that will eventually become so inscrutable we can’t hope to understand it. That may sound scary, but it also makes these systems all the more attractive in the current rush to buy and deploy them. The technology might one day wipe out the human race, but doesn’t that just illustrate how powerfully it could impact your business today?

This type of paradoxical propaganda has worked in the past. The prestigious AI lab DeepMind, largely seen as the main competitor to OpenAI, began life as a research lab with the ambitious goal of building AGI, or artificial general intelligence that could surpass human capabilities. Its founders, Demis Hassabis and Shane Legg, weren’t shy about the existential threat of this technology when they first turned to big-time VCs like Peter Thiel for funding more than a decade ago. In fact, they talked openly about the risks and got the money they needed.

Vaguely highlighting the world-destroying capabilities of AI lets our imaginations fill in the blanks, attributing infinite power to future AI. It’s a masterful marketing ploy.

2) It diverts attention from other initiatives that could hurt the business of major AI companies. A few examples: The European Union votes this month on a law, called the AI Act, that would force OpenAI to disclose any copyrighted material used to develop ChatGPT. (OpenAI’s Sam Altman initially said his company would “cease operations” in the EU because of the law, but later backtracked.) An advocacy group also recently urged the US Federal Trade Commission to require that AI systems be “transparent, explainable [and] fair.”

Transparency is at the heart of AI ethics, a field that big tech companies invested in most heavily between 2015 and 2020. Back then, Google, Twitter, and Microsoft all had strong teams of researchers exploring how AI systems like those that power ChatGPT could inadvertently perpetuate bias against women and ethnic minorities, infringe on people’s privacy, and harm the environment.

Yet the more these researchers dug, the more their companies’ business models seemed to be part of the problem. A 2021 paper by Google AI researchers Timnit Gebru and Margaret Mitchell said that the large language models being built by their employer could carry dangerous biases against minority groups, a problem made worse by the models’ opacity, and that they were vulnerable to misuse. Gebru and Mitchell were subsequently fired. Microsoft and Twitter also dismantled their AI ethics teams.

That has served as a cautionary tale to other AI ethics researchers, according to Alexa Hagerty, an anthropologist and affiliate fellow at the University of Cambridge. “‘You were hired to raise ethical concerns,’” she says, characterizing the attitude of tech firms, “‘but don’t raise the ones we don’t like.’”

The result is a crisis of funding and attention for the field of AI ethics, and confusion over where researchers should turn if they want to audit AI systems, a task made even more difficult as leading tech companies become more secretive about how their AI models are trained.

That’s a problem even for those who worry about catastrophe. How can people be expected to control AI in the future if those systems aren’t transparent and humans lack the expertise to examine them?

The task of untangling AI’s black box, often touted as nearly impossible, may not be so difficult after all. A May 2023 article in the Proceedings of the National Academy of Sciences (PNAS), a peer-reviewed journal, showed that solving the so-called explainability problem of AI is not as unrealistic as many experts have thought until now.

Technologists who warn of AI’s catastrophic risks, like OpenAI CEO Sam Altman, often do so in vague terms. Yet if such organizations truly believed there was even a slim chance their technology could wipe out civilization, why build it in the first place? It certainly conflicts with the long-termist moral math of Silicon Valley’s AI builders, which holds that a tiny risk carrying an infinite cost should be a major priority.

Taking a closer look at AI systems now, rather than wringing our hands over a vague apocalypse of the future, is not only more sensible; it also puts humans in a stronger position to prevent a catastrophic event from happening in the first place. Yet tech companies would much rather we worry about that distant prospect than push for transparency around their algorithms.

When it comes to our future with AI, we must resist letting the distractions of science fiction pull us away from the greater scrutiny that is necessary today.

© 2023 Bloomberg L.P.

