IEEE SA Industry Connections Research Group Launches New Framework on Autonomous and Intelligent Systems in Defense Applications
The IEEE Standards Association (IEEE SA) Industry Connections Research Group on Issues of Autonomy and AI in Defense Systems has developed a new framework addressing stakeholders who are involved in decision-making processes about autonomous and intelligent systems (AIS) in defense applications.
Industry Connections (IC) is an IEEE SA program that helps to incubate new ideas for standards by facilitating independent groups of interested stakeholders to collaborate on refining their thinking about rapidly changing technologies. The document – “A Framework for Human Decision Making through the Lifecycle of Autonomous and Intelligent Systems in Defense Applications” – is intended to support all stakeholders, including decision makers and those involved in strategic, operational, and tactical considerations, to apply existing broad sets of standards and ethical principles.
“The framework is the outcome of intense efforts by an independent group comprising volunteers drawn from a wide range of countries with expertise in multiple domains,” said Anja Kaspersen, IEEE SA Director for Global Markets Development, who provided oversight and guidance to the group.
“The potential impact of embedding AI features in defense systems has become a highly debated issue over the last decade, with many unknowns,” said Kaspersen. “The Research Group, formed in 2021, aims to share insights and build collective intelligence. It stems from a long-standing IEEE SA initiative on ethical considerations in AI and autonomous systems. In 2016, IEEE SA engaged global stakeholders to develop technical standards, drawing on its expertise in socio-technical system governance and its commitment to advancing technology for the benefit of humanity.”
“The Research Group set itself the goal of considering the full lifecycle of AI systems in defense contexts—from development to decommissioning,” said Ingvild Bode, Chair of the Research Group. “The resulting Framework offers detailed insights into stakeholder involvement at each stage, outlining when and how challenges should be addressed and who should be responsible and accountable.”
“The Framework highlights that there is no such thing as a completely autonomous defense system. Humans are always involved somewhere along the way. Human decision-makers need to comprehensively assess whether it is appropriate or ethical to use any given system for a defense application. It may not necessarily be appropriate, whether because the system is immature, technical safeguards are lacking, or humans have insufficient knowledge to operate the system safely.”
“The Research Group’s discussions made clear that experts from different domains can have different understandings of terminology and capabilities. This experience showed how bringing together diverse experts – including technical people from civilian sectors, experts in legal and ethical issues, policymakers, and military leaders – is critical to building common understanding,” said Ariel Conn, a member of the Research Group.
The published Framework includes important considerations for decision-makers who need to weigh the potential benefits, risks, and harms of developing and deploying a system; potential blind spots; effects on and gaps in human skills; technical or scientific shortfalls; and issues such as supply chain reliability.
The Framework was developed amid growing awareness of emergent capabilities that could enable a system to identify, select, and apply force to targets without human intervention, and with the understanding that even though some other military applications of AIS – such as assistance with navigation and logistics – may be less controversial, they still require thoughtful engagement.
The Research Group approached the task of developing the Framework by first creating a fictional example scenario. “The process of developing the scenario showed how individual steps in design and development may not seem problematic, but can trigger cascading effects,” said Rachel Azafrani, Vice-Chair of the Research Group.
“The capabilities and limitations of a system must always be understood in the context of other technical components or human personnel it may interact with for any given task,” added Ingvild Bode.
“With the growing proliferation of autonomous systems expected over the next few years, including AI-enabled autonomy, human oversight will be more important than ever,” added retired U.S. Air Force Lieutenant General Jack Shanahan, a member of the Research Group. “It now becomes a matter of where that oversight occurs, and how it’s implemented. That’s what the lifecycle Framework is designed to address, beginning with design and development.”
The Framework’s focus throughout is on the humans involved in decisions. It is designed to help identify the relevant decision-makers at each stage: their role, the knowledge and documentation they need from previous decision-makers, and the types of questions and issues to be addressed. Although it is focused on defense applications and use cases, it can also guide decision-makers involved in embedding AIS in critical functions more generally.
“Nothing in this Framework should be taken as precluding the development of new regulations, standards, policies, or laws to address concerns about AIS in defense contexts,” added Bode. “We hope this Framework will play an important role and be seen as a valuable tool to advance discussions on this topic.”
