
What Topics Can Be Discussed in the China-U.S. Artificial Intelligence Dialogue


Meeting in San Francisco in November 2023, Biden and Xi agreed to launch U.S.-China talks on the risks associated with advanced AI systems and potential areas for bilateral collaboration. In this piece, researchers at Tsinghua University detail where Washington's and Beijing's interests on AI issues might converge, and what they see as the most promising areas for discussion. While there is some consensus on basic principles around AI in the defense sphere, they argue, more fruitful discussions will center on non-traditional security fields, including the social governance challenges engendered by AI and the application of AI toward anti-crime and anti-terrorism objectives.


On November 15, 2023, Chinese President Xi Jinping met with U.S. President Joe Biden at the Filoli Estate in San Francisco. Among the series of important agreements reached by the two sides was the establishment of a China-U.S. intergovernmental dialogue mechanism on artificial intelligence. This not only sent a positive signal for easing the increasingly intense digital competition between the two countries but also gave new momentum to sustaining dialogue and cooperation across a range of fields. At a time when international public opinion is increasingly concerned about the security risks of artificial intelligence, this new dialogue channel established by China and the U.S. offers grounds for confidence in advancing global governance in related fields.


Strengthening Global Governance of Artificial Intelligence Meets International Expectations


With the global popularity of ChatGPT sparking widespread attention to the technology, global governance of artificial intelligence entered the fast lane in 2023. A representative phenomenon was the technology and industry sectors' high-profile calls for attention to the "loss of control risk" of artificial intelligence. On March 29, over a thousand industry executives, experts, and researchers, including Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and the CEO of British open-source AI company Stability AI, Emad Mostaque, signed an open letter calling for a "pause in the development of AI systems more powerful than GPT-4 for at least six months" to avoid "the end of human civilization by machines." On May 31, 350 industry executives and experts in the field of artificial intelligence publicly warned that AI, like nuclear weapons and pandemics, could pose a societal-scale risk of extinction to humanity, drawing intense global attention to the potential ethical risks of AI. On June 12, UN Secretary-General António Guterres emphasized the need to take experts' warnings about generative artificial intelligence seriously. Compared with the threat nuclear weapons pose to human survival, artificial intelligence has not yet demonstrated disruptive destructive power, and the notion that a superintelligent AI will exterminate humanity remains tinged with science-fiction and romantic speculation. Nevertheless, international public opinion holds high expectations that major countries will cooperate on governance to prevent "technological loss of control." As major powers in AI technology and applications, China and the U.S. both need to assume corresponding international governance responsibilities and respond to the main concerns of international public opinion.


Looking back at the evolution of global artificial intelligence governance over the past decade, the construction of international governance mechanisms and norms surged in 2023, creating conditions for China and the U.S. to discuss AI governance. On one hand, states parties to the UN Convention on Certain Conventional Weapons (CCW) have been discussing military applications of artificial intelligence, represented by lethal autonomous weapons, since 2014, and in 2019 adopted 11 guiding principles for the governance of lethal autonomous weapons. To this day, the Convention continues to serve as the central platform for the international community's discussion of the risks of military applications of AI. On July 12, 2023, the UN Security Council discussed artificial intelligence for the first time, and Secretary-General Guterres proposed, drawing on the experience of the International Atomic Energy Agency (IAEA), establishing a scientific advisory committee composed of AI experts and the chief scientists of UN agencies. On October 26, 2023, the UN's high-level advisory body on AI, which includes Chinese and American experts, was officially established. On the other hand, regional AI safety summits were held in quick succession: the Netherlands, Luxembourg, Costa Rica, and the United Kingdom each hosted international summits in 2023 on AI and the control and governance of autonomous weapons, issuing governance initiatives or joint statements. Among these, the AI Safety Summit hosted by the UK and its outcome document, the Bletchley Declaration, drew the most attention. Twenty-eight countries and regions, including China, the U.S., and Australia, together with the European Union, signed the declaration, agreeing to act to identify shared AI risks and to develop technical safety measures and international cooperation.


China-U.S. Artificial Intelligence Dialogue


China and the U.S. need to stabilize their relationship in the field of AI governance; they share common interests and responsibilities in ensuring that the technology develops healthily and is used safely to advance human well-being. Identifying the areas of AI governance where China-U.S. dialogue is most urgent, and assessing which of the two sides' policy positions might yield consensus, is an important first step toward implementing the "San Francisco Vision" consensus on establishing an intergovernmental AI dialogue between the two countries.


In AI safety governance, China and the U.S. have more room to form consensus in non-traditional security fields than in traditional ones. In traditional security fields, possible points of consensus include voluntarily increasing the transparency of each side's positions on military applications of AI, and communicating on crisis prevention, crisis management, and the prevention of arms races in military intelligentization. Although international concern about AI's involvement in strategic decision-making and execution is rising, along with worries about its so-called "impact on strategic stability," the major AI powers are unlikely to reach consensus on related arms control issues in the short term. On February 16, 2023, the U.S. released the "Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy" at the Summit on Responsible Artificial Intelligence in the Military Domain (REAIM) in The Hague, Netherlands. On November 21, the U.S. State Department announced that 46 countries supported a revised version of the declaration. Notably, the new version removed language on ensuring human control over decisions to launch nuclear weapons. The U.S. adjustment reflects the fact that the sensitivity of strategic forces, the ubiquity of intelligent technology, and the difficulty of conducting inspections make it very hard to advance related arms control discussions, let alone to establish arms control norms between nuclear and non-nuclear states.


In the non-traditional security field, the two countries can strive to form common norms on basic principles of AI governance. Although China and the U.S. express their AI governance principles differently, many of their core concepts are similar in substance: both emphasize human control over machines, both oppose discrimination against particular nationalities, religions, or beliefs in the development and application of AI products, and both hold that AI development should aim to enhance human well-being. Is it possible, then, for the two countries to discuss a common text on these governance principles? On October 18, 2023, the Chinese government issued the Global Artificial Intelligence Governance Initiative, comprising 11 points. These reflect, to a certain extent, the core concerns of major countries, including China and the U.S., about AI development, and propose a systematic plan for constructing a global governance system, providing a basic framework for the two countries to explore drafting "AI governance guidelines" in the future. Given that both countries have already issued multiple documents on AI technical standards, legal regulation, and international governance, the two sides can use these as a basis for communication, focusing on forming a common understanding of technical concepts and the principles for governing them.


On the other hand, both China and the U.S. face social governance challenges triggered by AI's penetration into human life. AI not only magnifies the challenges of protecting personal privacy, maintaining social equity, bridging the digital divide, and preventing terrorism, but also adds structural pressure on both countries as they confront these problems. In the face of these non-traditional security challenges, China and the U.S. have broad space for cooperation in certain functional areas, including: sharing their respective domestic AI governance practices; exchanging methods and standards for identifying AI-generated video and text; jointly combating criminal and terrorist activity to prevent irresponsible applications of the technology from undermining political stability and public safety; and avoiding the misuse of export control tools in ways that destabilize global AI industry and supply chains.


In summary, there are many topics the China-U.S. AI dialogue can take up. The dialogue would play a significant leading role in global governance; it should be launched as soon as possible and should strive to become a pioneering project that puts the "San Francisco Vision" into practice.



Cite This Page

王一凡 (Wang Yifan), 朱荣生 (Zhu Rongsheng). "What Topics Can Be Discussed in the China-U.S. Artificial Intelligence Dialogue [中美人工智能对话可以谈些什么]". CSIS Interpret: China, original work published in World Affairs [世界知识], January 1, 2024
