Scientists warn of dangers of artificial intelligence but do not agree on a solution

Computer scientists who helped build the foundation of today’s artificial intelligence technology are warning of its dangers, but that doesn’t mean they agree on what those dangers are or how to prevent them.

After retiring from Google so that he could speak more freely, Geoffrey Hinton, the so-called godfather of AI, plans to outline his concerns Wednesday at a conference at the Massachusetts Institute of Technology. He has already expressed regret about his life’s work and voiced doubts about humanity’s survival if machines become smarter than people.

Fellow AI pioneer Yoshua Bengio, Hinton’s co-winner of the top computer science prize, told The Associated Press on Wednesday that he is “very much aligned” with Hinton’s concerns about chatbots like ChatGPT and related technology, but worries that simply saying “we’re doomed” isn’t going to help.

“The main difference, I would say, is he’s a pessimistic person, and I’m more on the optimistic side,” said Bengio, a professor at the University of Montreal. “I think the threats – short-term ones, long-term ones – are very serious and need to be taken seriously not only by some researchers but also by governments and populations.”

There are plenty of signs that governments are listening. The White House has called on the CEOs of Google, Microsoft and ChatGPT-maker OpenAI to meet with Vice President Kamala Harris on Thursday for what officials described as a candid discussion on how to mitigate the near-term and long-term risks of their technology. European lawmakers are also ramping up talks to pass sweeping new AI rules.

But all the talk of the most dire future threats has some worried that the hype around superhuman machines – which don’t yet exist – is distracting from efforts to put practical safeguards on existing AI products that are largely unregulated.

Margaret Mitchell, a former leader of Google’s AI ethics team, said she is troubled that Hinton did not speak out during his decade in a position of power at Google, especially after the 2020 ouster of Black scientist Timnit Gebru, who had studied the harms of large language models before they were widely commercialized in products such as ChatGPT and Google’s Bard.

“It is a privilege that he gets to jump now to the realities of discrimination, the propagation of hate speech, toxicity and non-consensual pornography of women, all issues that are actively harming those who are marginalized in tech,” said Mitchell, who was also ousted from Google after Gebru’s departure. “He’s skipping over all those things to worry about something farther off.”

Bengio, Hinton and a third researcher, Yann LeCun, who works at Facebook parent Meta, were all awarded the Turing Award in 2019 for their breakthroughs in artificial neural networks, which have been central to the development of today’s AI applications such as ChatGPT.

Bengio, the only one of the three who did not land a job with a tech giant, has expressed concern about near-term AI risks, including the threat of job market volatility, automated weapons and biased data sets.

But those concerns have grown recently, prompting Bengio to join other computer scientists and tech business leaders such as Elon Musk and Apple co-founder Steve Wozniak in calling for a six-month pause on developing AI systems more powerful than OpenAI’s latest model, GPT-4.

Bengio said Wednesday that he believes the latest AI language models already pass the “Turing test,” named after the method British codebreaker and AI pioneer Alan Turing introduced in 1950 to measure when AI becomes indistinguishable from a human – at least on the surface.

“This is a milestone that could have serious consequences if we are not careful,” Bengio said. “My main concern is how they can be exploited for nefarious purposes to destabilize democracies, for cyberattacks and propaganda. You can interact with these systems and think you are interacting with a human. They are difficult to detect.”

Where researchers are less likely to agree is how current AI language systems – which have many limitations, including a tendency to fabricate information – will actually become smarter than humans.

Aidan Gomez was one of the co-authors of a pioneering 2017 paper that introduced a so-called transformer technique – the “T” at the end of ChatGPT – to improve the performance of machine-learning systems, specifically in how they learn from passages of text. Then just a 20-year-old intern at Google, Gomez remembers lying on a couch at the company’s California headquarters when his team sent out the paper around 3 a.m. on the day it was due.

“Aidan, this is going to be huge,” he remembers a colleague telling him about the work, which has since helped lead to new systems that can generate human-like prose and imagery.

Six years later, and now CEO of his own AI company, Cohere, Gomez is excited about the potential applications of these systems but troubled by fearmongering that he says is “detached from reality” about their true capabilities and “depends on extraordinary leaps of imagination and reasoning.”

“The notion that these models are somehow going to gain access to our nuclear weapons and launch some sort of extinction-level event is not a productive discourse,” Gomez said. “It’s detrimental to the actual practical policy efforts that are trying to do some good.”

