“I think that there’s a recognition that the sorts of norms we’re trying to promote are things that all countries should be able to get behind,” Pentagon emerging capabilities official Michael Horowitz said.
By Jaspreet Gill
Over the last 11 months, the US has made major progress in defining “Responsible Military Use of Artificial Intelligence” and even getting other nations to sign on to the idea — without ever actually precluding the kind of automated “killer robots” activists want to ban.
By Sydney J. Freedberg Jr.
Training artificial intelligence on military-specific doctrine and intelligence to come up with operational plans is an “active area of experimentation right now,” according to a Special Competitive Studies Project analyst.
By Sydney J. Freedberg Jr.
Last week, 33 nations called for a global treaty restricting “lethal autonomous weapons.” But US officials warn that such a ban would be both premature and overly narrow, preferring broader but non-binding “best practices” to guide any military employment of AI.
By Sydney J. Freedberg Jr.
The revised DoD Directive 3000.09 refines an obscure review process, adding broad AI ethics principles but still not actually forbidding development or deployment of would-be killer robots.
By Sydney J. Freedberg Jr.
“I think one of… the things we sought to accomplish in the course of the update is clarifying the language to ensure a common understanding both inside and outside the Pentagon of what the directive says,” said Michael Horowitz, director of the Pentagon’s Emerging Capabilities Policy Office.
By Jaspreet Gill
“We want to make sure, of course, that the directive still reflects the views of the department and the way the department should be thinking about [autonomous] weapon systems,” Michael Horowitz told Breaking Defense in an exclusive interview.