WASHINGTON — Following a highly publicized meeting between US President Joe Biden and Chinese President Xi Jinping, Biden announced three key points of agreement. While much of the focus will be on the two sides restarting mil-to-mil communications and counternarcotics cooperation (specifically against fentanyl), the third initiative stood out as a new item on the US-China agenda: artificial intelligence.
“Thirdly,” the president said at a press conference after the summit, “we’re gonna get our experts together to discuss risk and safety issues associated with artificial intelligence. As many of you [know] who travel with me around the world, almost everywhere I go, every major leader wants to talk about the impact of artificial intelligence. These are tangible steps in the right direction to determine what’s useful and what’s not useful, what’s dangerous and what’s acceptable.”
With the press conference focused on fentanyl, Taiwan, and Gaza, Biden didn’t elaborate on the AI plan. But the administration has not only made AI a signature issue, with a sweeping executive order on the subject; it has also been pushing hard for global norms on military use of AI in particular.
What’s more, the Chinese have been showing signs they are receptive, particularly when it comes to renouncing AI command-and-control systems for nuclear weapons. While neither Biden’s comments nor the White House readout expressly tied AI to nuclear weapons, experts told Breaking Defense ahead of the summit that the issue could prove to be a key point of agreement between Washington and the country the Pentagon has termed America’s “pacing” challenge.
Given ongoing tensions, “any agreement I think that we’re going to reach with the Chinese really is gonna be low-hanging fruit,” said Joe Wang, a former State and NSC staffer now with the Special Competitive Studies Project. What experts call “nuclear C2” (command and control for nuclear weapons) is an ideal candidate, he and other experts told Breaking Defense in interviews ahead of Biden’s announcement. “Nobody wants to see AI-controlled nuclear weapons, right? Like, even the craziest dictator can probably agree.”
“China has signaled interest in joining discussions on setting rules and norms for AI, and we should welcome that,” said Bonnie Glaser, head of the Indo-Pacific program at the German Marshall Fund. And that interest is reciprocated, she told Breaking Defense before the summit: “The White House is interested in engaging China on limiting the role of AI in command and control of nuclear weapons.”
Bans or Norms?
Hopes for some kind of joint statement had been high since Saturday, when the South China Morning Post, citing unidentified sources, declared that “Presidents Joe Biden and Xi Jinping are poised to pledge a ban on the use of artificial intelligence in autonomous weaponry, such as drones, and in the control and deployment of nuclear warheads.” (While the SCMP is a venerable independent paper in Hong Kong, it’s become increasingly supportive of the Beijing regime.)
The word “ban” raised experts’ eyebrows, because there’s no sign that either China or the US would accept a binding restriction on their freedom of action in AI. (Indeed, US law arguably prevents the president from making such a commitment without Congress.) Other reports suggested simply that “China is seeking an expanded dialogue on artificial intelligence,” while the US Ambassador to APEC, Matt Murray, said in a pre-summit press briefing that he didn’t expect an “agreement” on AI.
“The two sides may not be there yet in terms of reaching a formal agreement on AI,” said Tong Zhao, a scholar at the Carnegie Endowment, in an email to Breaking Defense ahead of Biden’s press conference.
This isn’t just a US-China question. Over the past nine months, Washington has been building momentum towards voluntary international norms on the military use of AI — not just autonomous weapons like drones, but also applications ranging from intelligence analysis algorithms to logistics software. This approach tries to fend off calls by many peace activists and nonaligned nations for a binding ban on “killer robots,” instead leaving room for the US and its allies to explore “responsible” use of a widely applicable and rapidly evolving technology.
The American one-two punch came in February. Early that month, the Pentagon rolled out an extensive overhaul of its policy on military AI and autonomous systems. The next week in The Hague, the State Department’s ambassador-at-large for arms control, Bonnie Jenkins, unveiled a “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy” [PDF] that generalized the US approach for international adoption. Since February, 45 other countries have joined the US in endorsing the Political Declaration, from core allies like Australia, Britain, France, Germany, and South Korea to geopolitical problem children like Hungary, Libya, and Turkey.
China, unsurprisingly, has not signed on to the US-led approach. “Its diplomatic strategy is still focused on rivaling and counterbalancing US efforts to set future AI governance standards, especially in the military sphere,” said Zhao. “In managing new military technologies, China frequently resists endorsing ‘responsible’ practices, contending that ‘responsibility’ is a politically charged concept lacking objective clarity.”
In diplomacy, however, a degree of ambiguity can be a feature, not a bug. “The Political Declaration … it’s not binding and so I think it gives us some wiggle room,” said Wang.
However, that kind of “wiggle room” — which allows for a wide range of automated weapons — is exactly what activists are seeking a binding ban to prevent.
“We obviously would like to see the US now moving towards clear and strong support for legal instruments” restricting “lethal autonomous weapons systems,” said Catherine Connolly, a lead researcher at the international activist group Stop Killer Robots, in an interview with Breaking Defense. “We don’t think that guidelines and political declarations are enough, and the majority of states don’t think they’re enough either.”
Fear of ‘Killer Robots’
Efforts towards a new international law have been stymied by a decade of deadlock in Geneva, where a Group of Governmental Experts convened by the UN has consistently fallen short of the consensus the process requires, a rule that effectively gives every state a veto.
So the anti-AI-arms movement went over Geneva’s head to New York, proposing a draft resolution to the UN General Assembly. Instead of calling for an immediate ban — which would have been certain to fail — the resolution [PDF], introduced by Austria, simply “requests the Secretary-General to seek the views of Member States,” industry, academia, and non-government organizations, submit a report, and officially put the issue on the UN agenda.
The resolution passed by a vote of 164 to five, with the US in favor and Russia opposing. Among the eight abstentions: China.
“It’s great that the US has joined the large majority of states in voting yes,” Connolly said. “It’s a little disappointing that China abstained, given that they have previously noted their support for a legal instrument, [but] there were some parts of the resolution that they didn’t agree with in terms of characteristics and definitions.”
Beijing, in fact, tends to use a uniquely narrow definition of “autonomous weapon,” one that only counts systems that, once unleashed, “do not have any human oversight and cannot be terminated” (i.e. shut off). That’s allowed China to claim it backs a ban while actually excluding the vast majority of autonomous systems any military might seek to build.
“China seems hesitant to enhance the United Nations General Assembly’s role in regulating military AI,” Zhao told Breaking Defense. It prefers to work through the Group of Governmental Experts in Geneva, where the requirement for consensus gives Beijing, along with Moscow, Washington, and every other state, a de facto veto.
The US itself has historically preferred the Geneva process and, even as it endorsed the UN resolution, warned that it “should not undercut the GGE,” noted CSIS scholar James Lewis. That “is pretty much what the Russians said” when they voted no, he said in an email to Breaking Defense. “It’s just we played it better, for once, by going with the crowd.”
“Binding law is not in the cards,” Lewis added, “but if the US can pull in others like the UK, France, and maybe the EU into a comprehensive effort, there can be progress on norms.”
So far, international discussion of the non-binding Political Declaration has actually led Washington to water it down — most notably, by removing a passage renouncing AI control of nuclear weapons.
“The February version included a statement based on a commitment the United States made together with France and the United Kingdom to ‘maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment,’” Jenkins said on Monday. “In our consultations with others, it became clear that, while many welcomed this statement of assurance, the nexus between AI and nuclear weapons is an area that requires much more discussion. We removed this statement so that it did not become a stumbling block for States to endorse.”
That even such a provision proved too much for some countries, despite the seminal sci-fi nightmare scenarios made infamous in the 1980s by The Terminator and WarGames, shows how hard it is to reach any consensus on the issue. An agreement to retain human control over nuclear weapons seems like “the lowest-hanging fruit,” said Wang.
“There can be that kind of alignment of interests…when it comes to nuclear weapons,” Wang said. “When it comes to more conventional weapons, I think that’s where it gets a little murkier.”