AI poses extinction-level threat to humans: US government report

Urgent national security concerns from AI development
NEW DELHI: The US government has been urged to act “quickly and decisively” to mitigate significant national security risks posed by artificial intelligence (AI), which, in the worst-case scenario, could present an “extinction-level threat to the human species,” according to a government-commissioned report.
Destabilizing global security
The report highlights the dangers associated with the rise of advanced AI and artificial general intelligence (AGI), likening their potential global security impact to that of nuclear weapons. Although AGI remains hypothetical, the pace at which AI labs are working toward such technologies suggests its arrival could be imminent.
Insights from industry experts
After consulting more than 200 people, including government officials, AI experts, and employees at leading AI companies, the report’s authors outline internal concerns about safety practices within the AI industry, particularly at frontier companies such as OpenAI, Google DeepMind, Anthropic, and Meta.
Zoom In
The report’s focus on “weaponization” risk and “loss of control” risk underlines the dual threats posed by rapidly evolving AI capabilities. It warns of a dangerous race among AI developers, spurred by economic incentives, that could sideline safety considerations.
The big picture
As AI technology races ahead, exemplified by tools like ChatGPT, the call for robust regulatory measures is growing louder. The proposal includes unprecedented actions such as making it illegal to train AI models above a specified computing-power threshold and forming a new federal AI agency to oversee this burgeoning field.
The role of hardware and advanced technology regulation
The document also calls for tighter controls on the manufacturing and export of AI chips and emphasizes the importance of federal funding for AI alignment research. The proposal includes measures to manage the proliferation of the high-end computing resources essential for training AI systems.
The “Gladstone Action Plan” aims to increase the safety and security of advanced AI to counteract catastrophic national security risks stemming from AI weaponization and loss of control. The plan calls for US government intervention through a series of measures:
Interim safeguards: Implementing interim safeguards such as export controls to stabilize advanced AI development. This includes creating an AI Observatory for monitoring, setting safeguards for responsible AI development and adoption, establishing an AI Safety Task Force, and imposing controls on the AI supply chain.
Capability and capacity: Strengthening the US government’s capability and capacity for advanced AI preparedness through education, training, and development of a response framework.
Technical investment: Boosting national investment in AI safety research and developing safety and security standards to address the rapid pace of AI advancements.
Formal safeguards: Establishing a regulatory agency for AI, the Frontier AI Systems Administration (FAISA), and creating a legal liability framework to cover long-term AI safety and security.
International law and supply chain: Enshrining AI safeguards in international law to prevent a global arms race in AI technology, establishing an International AI Agency, and forming an AI Supply Chain Control Regime with international partners.
The plan highlights the need for a “defense in depth” approach, offering multiple overlapping controls against AI risks and updating them as technology evolves. It acknowledges that AI development is complicated and fast-changing, and that its recommendations should therefore be vetted by experts.
Political and industry challenges
Despite the compelling nature of these recommendations, they are likely to encounter significant political and industry resistance, given current US government policy and the global nature of the AI development community.
Broader societal concerns
The report reflects growing public concern over AI’s potential to cause catastrophic events and the widespread belief that more government regulation is needed. These concerns are compounded by the rapid development of increasingly capable AI tools and the vast computing power being employed in their creation.