A group of experts says fears about the existential risk of AI are overstated

Superintelligence is not required for AI to cause harm. That’s already happening. AI is used to violate privacy, create and spread misinformation, compromise cyber-security and build biased decision-making systems. The prospect of military misuse of AI is imminent. Today’s AI systems help repressive regimes conduct mass surveillance and exert powerful forms of social control. Controlling or reducing these present-day harms is not only of immediate value; it is also the best option for reducing potential, even if hypothetical, future risks.

It is safe to say that the AI that exists today is not superintelligent. But it is possible that AI will be made superintelligent in the future; researchers are divided on how soon this could happen, or whether it will happen at all. Still, today’s AI models are impressive, and arguably embody a form of intelligence and understanding of the world; otherwise they would not be so useful. Yet they are easily fooled, prone to fabrication and sometimes fail to reason correctly. As a result, many contemporary harms arise from AI’s limitations rather than its capabilities.

It is unclear whether AI, superintelligent or not, is best thought of as an alien entity with agency of its own, or as part of the anthropogenic world, like any other technology shaped and built by humans. But for the sake of argument, let’s assume that at some point in the future a superintelligent AI emerges and interacts with humanity under its own agency, as an intelligent non-biological organism. Some X-risk boosters suggest that such an AI would drive humans to extinction through natural selection, outcompeting humanity with its superior intelligence.

Intelligence certainly plays a role in natural selection. But extinction is not the result of a struggle for dominance between “higher” and “lower” organisms. Rather, life is an interconnected web, with no top or bottom (consider the virtual indestructibility of the cockroach). Symbiosis and mutualism – mutually beneficial interactions between different species – are common, especially when one species depends on the other for resources. And in this case, AI is completely dependent on humans. From energy and raw materials to computer chips, manufacturing, logistics and network infrastructure, we are as fundamental to AI’s existence as oxygen-producing plants are to us.

Could computers eventually learn to take care of themselves, ousting humans from their ecology? This would amount to a fully automated economy, an outcome that is neither desirable nor inevitable, with or without superintelligent AI. Full automation is incompatible with current economic systems and, more importantly, it may be incompatible with human flourishing under any economic regime: recall the dystopia of Pixar’s “Wall-E.”

Fortunately, the road to automating all human labor is long. Each step presents a bottleneck (from an AI perspective) at which humans can intervene. In the meantime, the information-processing labor that AI can already perform at almost no cost presents both a great opportunity and an immediate socioeconomic challenge.

Some may still argue that AI X-risk, even if unlikely, is so grave that mitigating it should take priority over everything else. This echoes Pascal’s Wager, the 17th-century philosophical argument that it is rational to believe in God, just in case he is real, in order to avoid any possibility of the terrible fate of damnation. Pascal’s Wager, in both its original and AI versions, is designed to end rational debate by assigning infinite costs to uncertain outcomes.

In a utilitarian analysis, in which costs are multiplied by probabilities, an infinite cost times any non-zero probability is still infinite. Accepting the AI X-risk version of Pascal’s Wager can therefore lead to the conclusion that AI research should be stopped altogether or strictly controlled by governments. This could undercut the emerging field of for-profit AI, or create cartels that monopolize AI innovation. For example, if governments pass laws limiting the legal right to deploy large generative language models such as ChatGPT and Bard to only a few companies, those companies could gain unprecedented (and undemocratic) power to shape social norms, and the ability to extract rents on digital tools that could be vital to the 21st-century economy.

Perhaps rules could be designed to minimize the potential for X-risk while also addressing AI’s immediate harms? Probably not: proposals to curb AI X-risk are often in tension with proposals directed at existing AI harms. For example, rules limiting the open-source release of AI models or datasets make sense if the goal is to prevent the emergence of an autonomous, networked AI beyond human control. However, such restrictions may impede other regulatory aims, such as promoting transparency or preventing monopolies in AI systems. In contrast, regulation that targets concrete, short-term risks – such as requiring AI systems to disclose information about themselves honestly – would also help reduce longer-term, even existential, risks.

Regulators should not prioritize the existential risk posed by superintelligent AI. Instead, they should address the problems in front of them, making models safer and their operations more predictable, in line with human needs and norms. Regulation should focus on preventing inappropriate deployment of AI. And political leaders must re-imagine a political economy that promotes transparency, competition, fairness and the flourishing of humanity through the use of AI. This would go a long way toward preventing today’s AI harms, and would be a step in the right direction toward mitigating the more hypothetical, existential risks.

Blaise Aguera y Arcas is a Fellow at Google Research, where he leads a team working on artificial intelligence. This piece was co-written with Blake Richards, Associate Professor at McGill University and CIFAR AI Chair at the Mila-Québec AI Institute; Dhanya Sridhar, Assistant Professor at Université de Montréal and CIFAR AI Chair at the Mila-Québec AI Institute; and Guillaume Lajoie, Associate Professor at Université de Montréal and CIFAR AI Chair at the Mila-Québec AI Institute.

© 2023, The Economist Newspaper Limited. All rights reserved.

From The Economist, published under license. Original content can be found at www.economist.com