AI Safety and Geopolitical Competition

Actors interested in AI safety should lean into the political and structural challenges of dialogue.

We are still in the early stages of AI’s transformation of society, and many experts are focused on aligning AI with human values and implementing regulations to prevent potentially catastrophic risks.1 While valuable, these conversations are undermined by countervailing incentives of competition and growing geopolitical tensions. To be constructive, future dialogue should accept political competition as a factor in AI development – and seek avenues by which that competition can promote alignment, safety, and development. Put simply, the AI safety community must recognize the political valence of international competition.

Who’s Afraid of AI?

The United States and China are running full speed ahead with AI development, regardless of risks. The Trump Administration axed the Biden-era executive order on AI and issued its own, which seeks to advance American development with “fewer regulatory burdens.”2 This move, combined with staff cuts at the National Institute of Standards and Technology (NIST) and other federal agencies, suggests a focus on deregulation. The Trump Administration has also incentivized the development of energy infrastructure for AI.3 Vice President J.D. Vance’s speech at the 2025 Paris AI Action Summit is also instructive: he outlined administration plans to promote American leadership in AI and institute policies to prevent unnecessary regulatory burdens.4 Simultaneously, there have been initial discussions about a second official bilateral dialogue (also known as a “Track I” dialogue) between the United States and China on international AI governance and policy.

While continuing to participate in international discussions focused on AI governance and allowing for nongovernmental, expert-level (“Track II”) international dialogue around AI, the Trump Administration is also actively cutting regulations and limiting China’s access to chips. For the administration, AI is a “race” that America can win. There is no strong incentive to slow down or compromise.

China’s government has taken a similar approach. Although some Chinese experts are concerned about AI safety, the Chinese government has remained ambiguous on the topic. Furthermore, recent policy discussions suggest a stronger focus on acceleration at the expense of safety. An article in the People’s Daily, the Party’s leading mouthpiece, highlighted Xi Jinping’s emphasis on “seizing the historic opportunity of AI development.”5 Such signals are more impactful than regulations: they tell agencies, local governments, state-owned enterprises (SOEs), and private actors that investment in and diffusion of AI is not only politically correct but also practically expedient. There are growing social and material upsides to developing AI, and fewer downsides. In early March 2025, the National Development and Reform Commission announced a 1 trillion yuan ($138 billion) state-backed venture capital fund to further the development of AI and other emerging technologies.6 Following DeepSeek’s international success, Chinese institutions are rushing to integrate DeepSeek into their operations.7 In other words, diffusion is accelerating.

Internationally, China is amplifying its soft power. Its delegation at the Paris summit announced the creation of a nongovernmental “AI Development and Safety Network” to coordinate conversations around AI governance within China and enhance Chinese international engagement.8 This looks like a move aimed at centering safety within China’s AI governance approach. However, the influence of this new “network” should not be overestimated: it remains unofficial, and the Chinese government has not established its own version of a national AI Safety Institute. There are three key reasons for this: a lack of government consensus about the goals of a potential Chinese AI Safety Institute, bureaucratic competition within China’s AI ecosystem, and an unwillingness to commit to international agreements that could limit China’s future options. Except for Zhipu AI, Chinese companies have not signed on to international safety agreements, such as the Seoul Declaration, likely at the behest of their government.9 (Instead, they signed on to commitments released by a Chinese government think tank.10)

The weakness of this nongovernmental Chinese AI safety “network,” and of China’s AI governance approach more broadly, is its lack of power. The choices of Chinese entrepreneurs and AI developers are crucial to AI’s future, but their influence over government decision-making is limited. The Party refuses to surrender any key decision-making power to tech companies; these companies will similarly withhold key decision-making power from developers concerned about AI safety. Why can’t China create its own AI Safety Institute? Why can’t Chinese AI developers join international AI safety commitments? It seems that Chinese leadership does not want to and does not want its companies to. Party leadership remains in control.

The Risks Are Not Real Enough – Yet

Set aside that the world’s two leading ecosystems of AI development have an ambiguous relationship with AI safety. The various concepts of “AI safety,” particularly those built on comparisons with nuclear weapons, face other problems related to coherence, normative value, and practical implementation.

AI risks are often compared to nuclear weapons, particularly regarding existential threats and the need for states to collaborate to prevent apocalyptic scenarios.11 As recently as March 2025, Eric Schmidt and Dan Hendrycks compared superintelligent AI to nuclear weapons in terms of risk.12 There are, however, real limits to the nuclear analogy and to the value of mutually assured destruction (MAD) as a guiding paradigm in AI governance. While nuclear technology is centralized in the hands of state actors, AI is developed mostly by private actors in a distributed manner. It is easier for states to monitor nuclear technology development than to monitor and limit AI development, domestically or internationally.13 Even if states agreed to limit AI to specific tasks and applications, creating legal instruments and enforcement mechanisms that realize those limitations without expanding state power would be challenging. Overemphasizing AI safety might thus lead to unintended consequences, such as expanded state control.

Another weakness of the nuclear analogy is the limited uses of nuclear technology compared to AI. Nuclear technology is dual-use (energy or destruction), and nuclear weapons themselves have just one purpose.14 But AI is the mother of multi-use technologies: it can be applied to most forms of human behavior and interaction, from the most innocuous to the most lethal. Even the national security vectors of AI use are much broader than those of nuclear technology: AI can be used in espionage, surveillance, targeting, and other security applications. The limited use cases of nuclear weapons make international agreements more feasible. With AI, states may limit some uses but not others, leading to disagreements about which uses should be prohibited.

Other dynamics limit the applicability of a MAD paradigm. For MAD to work, a mutual view of risk is necessary but insufficient by itself. In other words, the United States and China could both agree that rapid AI development poses risks that are in neither country’s interests, yet still fail to coordinate on addressing those risks. That is because three conditions are required for MAD to incentivize behavior.

The first requirement is a joint understanding of risks; many concerns about AI risks are serious but abstract, and they are subject to different interpretations by different states. Nobody wants terminators, but there are different views as to what constitutes progress toward terminators. The second requirement is the ability to verify compliance with agreements. As America learned during the Cold War, trust requires verification. The third requirement is concrete, credible costs for noncompliance. Within nuclear MAD, noncompliance risked war and, in all likelihood, utter destruction. Similarly strong incentives in the AI space are simply not there – yet.

In recent years, there have been multiple “Sputnik” moments, from GPT-4 to DeepSeek’s models. The original Sputnik led to the space race – an acceleration. It was the Cuban Missile Crisis that led to more dialogue, arms control, and détente. What sort of AI-related event might have an effect analogous to that of the Cuban Missile Crisis? It is a question worth pondering.

No Great Expectations, But Realistic Ones

An America that appears uninterested in advancing safety commitments and a China that prioritizes competition present challenges for limiting AI development in the name of safety. Transnational “safety commitments” are therefore unlikely to reliably constrain AI developers or prevent “unsafe” behavior. Safety is, after all, subjective.

In addition, recent advancements in AI challenge the paradigm of AI-as-existential-risk, both because they exacerbate the dynamic by which technology outpaces regulatory thinking and because these developments have been, to be fair, largely positive.15 “AI competition” has not ended humanity yet. Instead, it has produced a variety of improved models that are available to more people. AI risks are still real, and it is still possible for governments to seriously coordinate on alleviating those risks. But the burden of arguing that “safety” should take priority, or that prioritizing it would not impact development and diffusion, has grown.

The point is not that concerns about existential risks are valid or invalid; rather, it is that the incentive structure for governments and other actors to respond to these risks is inadequate. Governments are clearly pursuing other goals. For those interested in persuading them to prioritize AI safety, simply possessing the correct theoretical arguments about AI risk, or even about how to address it, is not enough. These advocates must lean into the political and geopolitical challenges and develop mechanisms that provide governments with (1) a better understanding of the risks, (2) a means of verifying whether other actors are addressing the risks, and (3) real consequences for noncompliance.

Dialogues should continue with two focuses. First, we should recognize how political and geopolitical realities shape governmental incentives and limit the feasibility of AI safety goals. Second, we should move beyond shared principles toward shared actions. Dialogues should unite actors able to actualize AI governance goals and verify compliance.

International AI safety dialogues will continue as countries remain curious about others’ progress, and as AI risks grow more concrete. This discourse will prepare countries and other entities to act. However, we should recognize there are political and structural challenges to implementing concrete limitations on AI development. All lights are green for now.

  1. Melanie Mitchell, “What Does It Mean to Align AI With Human Values,” Quanta Magazine, December 13, 2022; Future of Life Institute, “About Us”; Machine Intelligence Research Institute, “The Problem”; China AI Safety & Development Association, “Homepage”; National Institute of Standards and Technology, “AI Safety Institute.”
  2. The White House, Initial Rescissions of Harmful Executive Orders and Actions, Executive Order, January 20, 2025; The White House, Removing Barriers to American Leadership in Artificial Intelligence, Executive Order, January 23, 2025.
  3. Josh Boak and Zeke Miller, “Trump highlights partnership investing $500 billion in AI,” Associated Press, January 22, 2025.
  4. “J.D. Vance: The Future of AI (English Subtitles),” YouTube video, posted by English Speeches, February 16, 2025; Élysée Palace, Sommet pour l’Action sur l’IA [Summit for Action on AI], 2025.
  5. “抓住人工智能发展的历史性机遇” [Seizing the Historic Opportunity of AI Development], 人民日报 [People’s Daily], February 24, 2025.
  6. “China to Set Up National Venture Capital Guidance Fund, State Planner Says,” Reuters, March 6, 2025.
  7. Eleanor Olcott and Wenjie Ding, “DeepSeek spreads across China with Beijing’s Backing,” Financial Times, February 26, 2025.
  8. AI Development and Safety Network, “Institutes.”
  9. Scott Singer, “DeepSeek and Other Chinese Firms Converge with Western Companies on AI Promises,” Carnegie Endowment for International Peace, January 28, 2025.
  10. “维护AI安全, 共建行业自律典范—首批17家企业签署《人工智能安全承诺》” [Maintaining AI Safety, Jointly Building a Model of Industry Self-Discipline: First 17 Enterprises Sign the “AI Safety Commitments”], 中国信通院 [China Academy of Information and Communications Technology], February 24, 2024.
  11. See, e.g., Kevin Klyman and Raphael Piliero, “AI and the A-bomb: What the analogy captures and misses,” Bulletin of the Atomic Scientists, September 9, 2024. Elon Musk has also compared AI to nuclear weapons: see “AI is More Dangerous than Nuclear Weapons,” CNBC, March 13, 2018.
  12. Dan Hendrycks and Eric Schmidt, “The Nuclear-Level Risk of Superintelligent AI,” Time Magazine, March 6, 2025.
  13. While the resources required to develop frontier AI are easier for states to monitor, capabilities are diffusing faster and more cheaply, making monitoring more difficult.
  14. Yes, nuclear capabilities can be leveraged by a state in more nuanced ways. But the core function of nuclear weapons is binary: use them and destroy, or do not use them.
  15. In December 2024, OpenAI’s o3 excelled on the ARC-AGI test, which measures “adaptability and novelty” in LLMs – a breakthrough in kind, not just degree. François Chollet noted that this advancement demonstrates “program synthesis” and unprecedented “abstraction and reasoning” capabilities (see François Chollet, “OpenAI o3 breakthrough high score on arc-agi-pub,” ARC Prize Blog, December 20, 2024). DeepSeek’s success demonstrates the power of open-source models, efficiency gains, inference training, and Chinese companies’ capacity for AI advancements. For deeper discussions of DeepSeek’s atypicality and success, see Lily Ottinger and Jordan Schneider, “DeepSeek: What It Means and What Happens Next,” ChinaTalk, February 1, 2025, and Jordan Schneider, “DeepSeek’s Secret to Success,” ChinaTalk, January 30, 2025.