On November 15, 2023, Chinese President Xi Jinping met with U.S. President Joe Biden at the Filoli Estate in San Francisco. Among the series of important agreements the two sides reached was the establishment of a China-U.S. intergovernmental dialogue mechanism on artificial intelligence. This not only sent a positive signal for easing the two countries' increasingly intense digital competition but also gave new momentum to sustaining their dialogue and cooperation across a range of fields. At a time when international opinion is increasingly concerned about the security risks of artificial intelligence, this new dialogue channel established by China and the U.S. offers confidence that global governance in the field can move forward.
Strengthening Global Governance of Artificial Intelligence Meets International Expectations
With ChatGPT's explosive global popularity drawing intense public attention to the technology, global governance of artificial intelligence entered the fast lane in 2023. One emblematic development was the technology and industry sectors' high-profile calls to take seriously the "risk of losing control" of AI. On March 29, more than a thousand industry executives, experts, and researchers, including Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and Emad Mostaque, CEO of the British open-source AI company Stability AI, signed an open letter calling for a pause of at least six months in developing AI systems more powerful than GPT-4, so as to avoid "the end of human civilization at the hands of machines." On May 31, 350 executives and experts in the AI field publicly warned that AI, like nuclear weapons and pandemics, could pose a societal-scale risk of extinction to humanity, sparking worldwide concern about the technology's potential ethical risks. On June 12, UN Secretary-General António Guterres stressed that expert warnings about generative AI must be taken seriously. Compared with the threat nuclear weapons pose to human survival, AI has yet to demonstrate disruptive destructive power, and the notion that a superintelligence will exterminate humanity remains tinged with science-fictional and romantic speculation. Even so, international opinion is full of expectation that the major powers will cooperate on governance to prevent "technology from spinning out of control." As leading countries in AI technology and applications, China and the U.S. both need to shoulder corresponding international governance responsibilities and respond to the central concerns of international opinion.
Looking back at the past decade's evolution of the global AI order, the construction of international governance mechanisms and norms surged in 2023, creating conditions for China and the U.S. to open a dialogue on AI governance. On one hand, the UN Convention on Certain Conventional Weapons (CCW) has, since 2014, hosted discussions of military applications of AI, exemplified by lethal autonomous weapons, and in 2019 it established 11 guiding principles for the governance of lethal autonomous weapons. To this day, the Convention remains the central platform on which the international community debates governing the risks of military AI. On July 12, 2023, the UN Security Council discussed artificial intelligence for the first time, and Secretary-General Guterres proposed, drawing on the experience of the International Atomic Energy Agency (IAEA), establishing a scientific advisory board composed of AI experts and the chief scientists of UN agencies. On October 26, 2023, the UN's high-level advisory body on AI, which includes both Chinese and American experts, was formally established. On the other hand, regional "AI safety summits" convened in quick succession: the Netherlands, Luxembourg, Costa Rica, and the United Kingdom all hosted international summits in 2023 on AI and the arms control and governance of autonomous weapons, issuing related governance initiatives or joint statements. Of these, the UK's "AI Safety Summit" and its outcome document, the Bletchley Declaration, attracted the most attention: 28 countries and regions, including China, the U.S., the European Union, and Australia, signed the declaration, agreeing to act to identify AI risks of shared concern and, each on its own part, to develop technical safety measures and pursue international cooperation.
China-U.S. Artificial Intelligence Dialogue
China and the U.S. not only need to stabilize their relationship in the field of AI governance; they also share an interest in, and a responsibility for, ensuring that the technology develops healthily and is used safely for the benefit of humanity. Identifying the areas of AI governance where China-U.S. communication is most urgently needed, and assessing which of the two sides' policy positions might converge into consensus, is an important first step toward implementing the "San Francisco Vision" consensus of the two heads of state on establishing an intergovernmental dialogue on AI.
In AI safety governance, China and the U.S. have more room to form consensus in non-traditional security fields than in traditional ones. In traditional security fields, possible points of consensus include voluntarily increasing the transparency of each side's positions on military applications of AI, and communicating on crisis prevention, crisis management, and the avoidance of arms races around military AI issues. Although international concern about AI's involvement in strategic decision-making and execution is rising, along with worries about a so-called "shock to strategic stability," the major AI powers are unlikely to reach agreement on related arms control issues in the short term. On February 16, 2023, at the Responsible AI in the Military Domain (REAIM) Summit in The Hague, the Netherlands, the U.S. released the "Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy." On November 21, the U.S. State Department announced that 46 countries supported the revised version of the declaration. Notably, the new version dropped the commitment to ensure human control over decisions to launch nuclear weapons. The U.S. made this adjustment because the high sensitivity of strategic forces, the pervasiveness of intelligent technology, and the difficulty of verification make related arms control discussions exceedingly hard to advance, let alone the establishment of arms control norms between nuclear and non-nuclear states.
In non-traditional security fields, the two countries can strive to form shared norms on the basic principles of AI governance. Although China and the U.S. articulate their AI governance principles differently, many of their core concepts have similar connotations: both stress human control over machines, both oppose discrimination against particular ethnicities, religions, or beliefs in the development and application of AI products, and both hold that AI development should aim to enhance human well-being. Is it possible, then, for China and the U.S. to work out a common text on these governance principles? On October 18, 2023, the Chinese government issued the Global AI Governance Initiative, whose 11 points reflect, to a certain extent, the core concerns of the world's major countries, China and the U.S. included, about AI development and set out a systematic plan for building a global governance system, providing a basic framework the two countries could draw on in exploring future "AI governance guidelines." Given that both countries have already issued multiple documents on AI technical standards, laws and regulations, and international governance, the two sides can use these as the basis for communication, focusing on forming a common understanding of key technical concepts and the principles for governing them.
At the same time, both China and the U.S. face the social governance challenges triggered by AI's penetration into everyday life. AI not only magnifies the difficulties of protecting personal privacy, maintaining social equity, bridging the digital divide, and preventing terrorism, but also adds structural pressure on both countries as they confront these challenges. Facing these non-traditional security challenges, China and the U.S. have broad room for cooperation in certain functional areas, including: sharing their respective domestic experience in AI governance; exchanging methods and standards for identifying AI-generated video and text; jointly combating criminal and terrorist activity so that irresponsible applications of the technology do not undermine political stability and public safety; and refraining from misusing export control tools in ways that destabilize the global AI industrial and supply chains.
In short, there are many topics a China-U.S. AI dialogue could take up, and its leading role in global governance would be significant. It should be launched as soon as possible, with the aim of becoming a pioneering project for putting the "San Francisco Vision" into practice.