Experts Say “Angry” Bing Chatbot Is Just Mimicking Humans


The Bing chatbot was designed by Microsoft and start-up OpenAI. (Representative)

San Francisco:

Microsoft’s nascent Bing chatbot is likely to be testy or even threatening because it essentially mimics what it learns from online conversations, analysts and academics said Friday.

Stories of troubling exchanges with chatbots that gained attention this week include artificial intelligence (AI) making threats about stealing nuclear codes, creating deadly viruses or wanting to survive.

“I think it’s basically mimicking conversations that it sees online,” said Graham Neubig, an associate professor at Carnegie Mellon University’s Language Technologies Institute.

“So once the conversation takes a turn, it’s probably going to stay in that angry state, or say ‘I love you’ and other things like that, because it’s all stuff that’s been online before.”

A chatbot, by design, serves up words, predicting the most likely responses without understanding the meaning or context.

However, humans engaging in banter with the programs naturally read emotion and intent into what the chatbot says.

Programmer Simon Willison said in a blog post, “Large language models have no concept of ‘truth’ – they just know how to complete a sentence in a way that is statistically probable based on their inputs and training set.”

“So they make things up, and then state them with extreme conviction.”
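To see what “completing a sentence that is statistically probable” means in practice, here is a minimal illustrative sketch. It assumes the open-source Hugging Face transformers library and the small public GPT-2 model, which are stand-ins for illustration only and not the system behind Bing; all it does is score which word is most likely to come next.

```python
# Illustrative sketch only: uses the public "gpt2" checkpoint via the
# Hugging Face transformers library, NOT Microsoft's or OpenAI's Bing model.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "I love"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model assigns a score to every possible next token.
    logits = model(**inputs).logits

# Convert the scores for the last position into probabilities
# and print the five most likely continuations.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
```

The model picks whichever continuation is statistically likely given its training data; nothing in this process checks whether the resulting sentence is true.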

Laurent Daudet, co-founder of the French AI company LightOn, theorized that the chatbot had seemingly been trained on exchanges that themselves turned aggressive or inconsistent.

“Addressing this requires a lot of effort and a lot of human feedback, which is why we have chosen to limit ourselves to commercial uses and not more interactive ones,” Daudet told AFP.

‘Off the rails’

The Bing chatbot was designed by Microsoft and start-up OpenAI, which has been causing a sensation since the November launch of ChatGPT, the headline-grabbing app capable of generating all kinds of written content in seconds upon a simple request.

Ever since ChatGPT appeared, the technology behind it, known as generative AI, has been fueling fascination and concern.

Microsoft said in a blog post, “The model sometimes attempts to respond or reflect the tone in which it is being asked to respond (and) that can lead to a style we don’t intend,” describing the chatbot as a work in progress.

The Bing chatbot stated in some shared exchanges that it was named “Sydney” during development, and was given rules of behavior.

Those rules include that “Sydney’s responses must also be positive, interesting, entertaining and engaging,” according to the online post.

The troubling dialogue that mixes steely threats and professions of love may be due to conflicting instructions to remain positive while mimicking what the AI has mined from human exchanges.

Yoram Wurmser, principal analyst at eMarketer, told AFP that chatbots appear to be more prone to annoying or bizarre responses during long conversations, losing a sense of where the exchange is going.

“They can really derail,” Wurmser said.

“It’s very lifelike, because [the chatbot] is very good at predicting next words, which makes it seem like it has feelings or gives it human qualities; but it’s still statistical output.”

(This story has not been edited by NDTV staff and was auto-generated from a syndicated feed.)
