Tackling risks from AI should be a “global priority”, experts say

Paris:

A group of industry chiefs and experts warned on Tuesday that global leaders must act to reduce the “risk of extinction” from artificial intelligence technology.

The one-line statement, signed by dozens of experts including Sam Altman, whose firm OpenAI created the ChatGPT bot, said tackling risks from AI should be "a global priority alongside other societal-scale risks such as pandemics and nuclear war".

ChatGPT hit the headlines late last year, demonstrating its ability to generate essays, poems and conversations from the briefest of prompts.

The program's runaway success sparked a gold rush, with billions of dollars invested in the sector, but critics and insiders sounded the alarm.

Common concerns include the possibility that chatbots could flood the web with misinformation, that biased algorithms would churn out racist content, or that AI-powered automation could ruin entire industries.

Superintelligent machines

The latest statement, posted on the website of the US-based non-profit Center for AI Safety, gave no details of the potential existential threat posed by AI.

The center said the “brief statement” was meant to open a discussion on the dangers of the technology.

Several signatories, including Geoffrey Hinton, who created some of the technology underlying AI systems and is known as one of the industry's godfathers, have made similar warnings in the past.

His greatest concern is the rise of so-called Artificial General Intelligence (AGI), a loosely defined concept for the moment when machines become capable of performing a broad range of tasks and can develop their own programming.

The fear is that humans would no longer have control over superintelligent machines, which experts warn could have devastating consequences for the species and the planet.

Dozens of academics and experts from companies including Google and Microsoft, both leaders in the AI field, signed the statement.

The statement comes two months after Tesla boss Elon Musk and hundreds of others issued an open letter calling for a pause in the development of such technology until it can be shown to be safe.

However, Mr Musk’s letter sparked widespread criticism that the dire warnings of societal collapse were grossly exaggerated and often mirrored the talking points of AI boosters.

US academic Emily Bender, who co-wrote an influential paper criticizing AI, said the March letter signed by hundreds of notable figures was "dripping with AI propaganda".

'Surprisingly non-biased'

Bender and other critics point to AI firms’ refusal to publish the sources of their data or to reveal how it is processed — a so-called “black box” problem.

Among the criticisms is that the algorithms can be trained on racist, sexist or politically biased content.

Sam Altman, who is currently touring the world to help shape the global conversation around AI, has hinted several times at the global threat posed by the technology his firm is developing.

“If something goes wrong with the AI, no gas mask will help you,” he told a small group of journalists in Paris last Friday.

But he defended his firm's refusal to publish the source data, saying critics really just wanted to know whether the models were biased.

“How it does on the racial bias test is what matters there,” he said, adding that the latest model was “surprisingly non-biased.”

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)