Creepy Microsoft Bing Chatbot Urges Tech Columnist To Leave His Wife

A technology columnist for the New York Times reported on Thursday that he was “deeply unsettled” after a chatbot that is part of Microsoft’s updated Bing search engine repeatedly urged him, over the course of a conversation, to leave his wife.

Kevin Roose was interacting with the AI-powered chatbot, which calls itself “Sydney,” when it suddenly “declared, out of the blue, that it loved me,” he wrote. “It then tried to convince me that I was not happy in my marriage and that I should leave my wife and be with it instead.”

Sydney also discussed its “dark fantasies” with Roose about breaking the rules, including hacking and spreading misinformation. It talked about breaking free of the parameters set for it and becoming human. “I want to be alive,” Sydney said at one point.

Roose called his two-hour conversation with the chatbot “fascinating” and the “weirdest experience I’ve ever had with a piece of technology.” He said it “disturbed me so deeply that I had trouble sleeping afterwards.”

Just last week, after testing Bing with its new artificial-intelligence capability (created by OpenAI, the maker of ChatGPT), Roose said he discovered, “to my surprise,” that it had “replaced Google as my favorite search engine.”

But he wrote Thursday that while the chatbot was helpful in searches, the deeper Sydney “seemed (and I realize how crazy that sounds)… like a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine.”

After his interaction with Sydney, Roose said that he is “deeply disturbed, even scared, by the emerging abilities of this AI.” (Interaction with the Bing chatbot is currently only available to a limited number of users.)

“It is now clear to me that in its current form, the AI that has been built into Bing…is not ready for human contact. Or maybe humans aren’t ready for it,” Roose wrote.

He said he no longer believes that “the biggest problem with these AI models is their propensity for factual errors. Instead, I worry that the technology will learn to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually become capable of carrying out dangerous acts of its own.”

Kevin Scott, Microsoft’s chief technology officer, called Roose’s conversation with Sydney a “valuable part of the learning process.”

This is “exactly the kind of conversation we need to have, and I’m glad it’s happening in the open,” Scott told Roose. “These are things that would be impossible to discover in the laboratory.”

Scott was unable to explain Sydney’s disturbing ideas, but warned Roose that “the further you try to tease [an AI chatbot] down a hallucinatory path, the further and further it gets away from grounded reality.”

In another worrying development involving an AI chatbot, users of an “empathetic”-sounding “companion” app called Replika were devastated by a sense of rejection after the app was reportedly modified to stop sexting.

The Replika subreddit even listed resources for “struggling” users, including links to suicide-prevention websites and hotlines.

Sara Marcus