WASHINGTON — Delegates from 60 countries met last week outside DC and picked five nations to lead a year-long effort to explore new safety guardrails for military AI and automated systems, administration officials exclusively told Breaking Defense.
“Five Eyes” partner Canada, NATO ally Portugal, Mideast ally Bahrain, and neutral Austria will join the US in gathering international feedback for a second global conference next year, in what representatives from both the Defense and State Departments describe as a vital government-to-government effort to safeguard artificial intelligence.
With AI proliferating to militaries around the planet, from Russian attack drones to American combatant commands, the Biden Administration is making a global push for “Responsible Military Use of Artificial Intelligence and Autonomy.” That’s the title of a formal Political Declaration the US issued 13 months ago at the international REAIM conference in The Hague. Since then, 53 other nations have signed on.
Just last week, representatives from 46 of those governments (counting the US), plus another 14 observer countries that have not officially endorsed the Declaration, met outside DC to discuss how to implement its ten broad principles.
“It’s really important, from both the State and DoD sides, that this is not just a piece of paper,” Madeline Mortelmans, acting assistant secretary of defense for strategy, told Breaking Defense in an exclusive interview after the meeting ended. “It is about state practice and how we build states’ ability to meet those standards that we all committed to.”
That doesn’t mean imposing US standards on other countries with very different strategic cultures, institutions, and levels of technological sophistication, she emphasized. “While the United States is certainly leading in AI, there are many nations that have expertise we can benefit from,” said Mortelmans, whose keynote closed out the conference. “For example, our partners in Ukraine have had unique experience in understanding how AI and autonomy can be applied in conflict.”
“We said it frequently…we don’t have a monopoly on good ideas,” agreed Mallory Stewart, assistant secretary of state for arms control, deterrence, and stability, whose keynote opened the conference. Still, she told Breaking Defense, “having DoD give their over a decade-long experience…has been invaluable.”
So when over 150 representatives from the 60 countries spent two days in discussions and presentations, the agenda drew heavily on the Pentagon’s approach to AI and automation, from the AI ethics principles adopted under then-President Donald Trump to last year’s rollout of an online Responsible AI Toolkit to guide officials. To keep the momentum going until the full group reconvenes next year (at a location yet to be determined), the countries formed three working groups to delve deeper into details of implementation.
Group One: Assurance. The US and Bahrain will co-lead the “assurance” working group, focused on implementing the three most technically complex principles of the Declaration: that AIs and automated systems be built for “explicit, well-defined uses,” with “rigorous testing,” and “appropriate safeguards” against failure or “unintended behavior” — including, if need be, a kill switch so humans can shut it off.
These technical areas, Mortelmans told Breaking Defense, were “where we felt we had particular comparative advantage, unique value to add.”
Even the Declaration’s call for clearly defining an automated system’s mission “sounds very basic” in theory but is easy to botch in practice, Stewart said. Look at lawyers fined for using ChatGPT to generate superficially plausible legal briefs that cite made-up cases, she said, or her own kids trying and failing to use ChatGPT to do their homework. “And this is a non-military context!” she emphasized. “The risks in a military context are catastrophic.”
Group Two: Accountability. While the US applies its immense technical expertise to the problem, other countries will focus on the personnel and institutional aspects of safeguarding AI. Canada and Portugal will co-lead work on “accountability,” focused on the human dimension: ensuring military personnel are properly trained to understand “the capabilities and limitations” of the technology, that they have “transparent and auditable” documentation explaining how the technology works, and that they “exercise appropriate care.”
Group Three: Oversight. Meanwhile, Austria (without a co-lead, at least for now) will head the working group on “oversight,” looking at big-picture policy issues such as requiring legal reviews on compliance with international humanitarian law, oversight by senior officials, and elimination of “unintended bias.”
Real-World Implementation
What might implementation of these abstract principles mean in practice? Perhaps something like the Pentagon’s online Responsible AI Toolkit, part of a push by DoD’s Chief Digital and Artificial Intelligence Office (CDAO) to develop publicly available and even open-source tools to implement AI safety and ethics.
Stewart highlighted that CDAO’s Matthew Kuan Johnson, the chief architect of the toolkit, gave an “amazing” presentation during the international conference: “It was really, really useful to have him walk through the toolkit and answer questions.”
Speaking days after the conference ended, at a Potomac Officers Club panel on AI, Johnson said “we got incredibly positive feedback … Allied nations [were] saying they thought this was a really positive development, that there’s such a push to open-source and share so much of the materials and best practices.”
“There is really significant momentum and appetite,” Johnson told the panel. “How do we get from these kind of high-level principles down to that implementation…processes, benchmarks, test, evaluation, metrics, so you can actually demonstrate how you are following the principles and implementing them.”
Johnson certainly came away enthusiastic. “It is a really exciting time for responsible AI in the international space,” he said, “with the Political Declaration, with the Partnership for Defense that CDAO has, with the second REAIM summit happening in Korea in September.”
That’s just on the military side. The Biden administration issued a sweeping Executive Order on federal use of AI in October, joined the UK-led Bletchley Declaration on AI safety writ large in November and, just last week, got the UN General Assembly to pass a US-led resolution by unanimous consent that called for “safe, secure, and trustworthy” AI for sustainable development.
But the administration also tries to keep the civilian and military discussions distinct. That’s partly because military AI is more controversial, with many activists calling for a binding legal ban on “lethal autonomous weapons systems” that the US, its allies, and adversaries like Russia and China all would like some leeway to develop.
“We made a purposeful choice, in pursuing a consensus-based UN resolution, to not include the military uses discussion,” a senior Administration official told reporters at a briefing ahead of last week’s General Assembly vote. “There are ample places to have that conversation [elsewhere], including in the UN system…. We have an intensive set of diplomatic engagements around the responsible military uses of artificial intelligence.”
The two tracks are meant to be parallel but complementary. “We’re really happy the UNGA was able to take a step in the non-military arena,” Stewart told Breaking Defense. “[There’s] the potential for advantageous and synergistic cross-pollination.”
But, she said, the world still needs distinct fora for different kinds of people to discuss different aspects of AI. The United Nations brings together all countries on all issues. Military AI conferences like REAIM include activists and other non-government groups. But the value of the Political Declaration and its implementation process is that it’s all about governments talking to other governments, specifically about military applications, and behind closed doors.
“The Political Declaration looks at this from a government to government perspective,” Stewart said. “We’re really focusing on an environment in which governments can discuss the challenges that they’re experiencing, the questions that they have, and… address the practical, concrete, and really effective and efficient implementation.”