Technical report | Case Study: A Method for Ethical AI in Defence Applied to an Envisioned Tactical Command and Control System

Executive Summary

The use of artificial intelligence (AI) in a defence context poses significant ethical questions and risks. Defence will need to address these as AI systems are developed and deployed in order to maintain the reputation of the ADF, uphold Australia's domestic and international legal obligations, and support the development of an international AI regime based on liberal democratic values.

This report, Case Study: A Method for Ethical AI in Defence Applied to an Envisioned Tactical Command and Control System, is the product of a scientific and technical (S&T) collaboration between the Department of the Prime Minister and Cabinet (PM&C), Defence and the 3A Institute at the Australian National University (ANU). It uses A Method for Ethical AI in Defence [1] to explore the ethical risks in an envisioned AI-enabled tactical command and control (C2) system that integrates a variety of autonomous functions in order to assist a single human operator in managing multiple uninhabited vehicles simultaneously.

The analysis of this envisioned C2 system using A Method for Ethical AI in Defence generated key findings for three stakeholder groups: whole-of-Defence, AI technology developers, and those seeking to use or iterate A Method for Ethical AI in Defence.

For Defence, the report identifies critical policy gaps and recommends action on:

  • an accountability framework for decisions made by and with AI
  • education and training of operators, commanders and systems developers
  • managing the data underpinning many AI applications, including its collection, transformation, storage and use.

Without action, these gaps leave Defence vulnerable to significant reputational and operational damage.

Additional key findings for AI technology developers relate to effectiveness, integration, authority pathway, confidence and resilience. In aggregate, these findings encourage developers to consider whether the most efficient system or algorithm (for example, in terms of speed or accuracy) is necessarily the best at assisting the decision-maker. In some cases, a less efficient algorithm that is more consistent with normative decision-making may be more appropriate. There is also a clear need for research on what information is necessary to make good judgements, especially where issues are complex and context is important, and on how that information can be rapidly conveyed. These key findings can be explored further through seven hypothetical ethical risk scenarios developed as part of the analysis.

For those seeking to apply or iterate A Method for Ethical AI in Defence, the report recommends developing additional tools to help practitioners identify the areas of greatest relevance and utility for their specific needs, and a comprehensive set of definitions to support application of the method.

Key information

Authors

Dianna Gaetjens, Kate Devitt and Christopher Shanahan

Publication number

DSTG-TR-3847

Publication type

Technical report

Publication date

July 2021

Classification

Unclassified - public release

Keywords

Artificial Intelligence, Ethics, Command and Control, Intelligent Agents