Technical report | A Method for Ethical AI in Defence

Recent developments in artificial intelligence (AI) have highlighted the technology's significant potential to increase Defence capability, including improving performance, removing humans from high-threat environments, reducing capability costs and achieving asymmetric advantage. However, significant work is required to ensure that introducing the technology does not result in adverse outcomes. Defence's challenge is that failure to adopt AI in a timely manner may result in a military disadvantage, while premature adoption without sufficient research and analysis may lead to inadvertent harms.
To explore how to achieve ethical AI in Defence, a workshop was held in Canberra from 30 July to 1 August 2019 with 104 people from 45 organisations in attendance, including representatives from Defence, other Australian government agencies, the Trusted Autonomous Systems Defence Cooperative Research Centre (TASDCRC), civil society, universities and Defence industry.

The workshop was designed to elicit evidence-based hypotheses regarding ethical AI from a diverse range of perspectives and contexts, and to produce pragmatic methods for managing ethical risks on AI projects in Defence.

Twenty topics emerged from the workshop: education, command, effectiveness, integration, transparency, human factors, scope, confidence, resilience, sovereign capability, safety, supply chain, test and evaluation, misuse and risks, authority pathway, data subjects, protected symbols and surrender, de-escalation, explainability and accountability.

These topics were categorised into five facets of ethical AI:

  • Responsibility – who is responsible for AI?
  • Governance – how is AI controlled?
  • Trust – how can AI be trusted?
  • Law – how can AI be used lawfully?
  • Traceability – how are the actions of AI recorded?
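
The grouping of topics under facets can be illustrated as a simple mapping, sketched below in Python. The assignment of individual topics to facets here is an illustrative assumption, not the report's definitive categorisation.

```python
# Hypothetical sketch: the five facets as a mapping from facet name to its
# guiding question and example workshop topics. Topic placements are
# illustrative assumptions only.
FACETS = {
    "Responsibility": {
        "question": "Who is responsible for AI?",
        "example_topics": ["education", "command"],                # assumed
    },
    "Governance": {
        "question": "How is AI controlled?",
        "example_topics": ["authority pathway", "data subjects"],  # assumed
    },
    "Trust": {
        "question": "How can AI be trusted?",
        "example_topics": ["confidence", "resilience", "safety"],  # assumed
    },
    "Law": {
        "question": "How can AI be used lawfully?",
        "example_topics": ["protected symbols and surrender",
                           "de-escalation"],                       # assumed
    },
    "Traceability": {
        "question": "How are the actions of AI recorded?",
        "example_topics": ["explainability", "accountability"],    # assumed
    },
}

for facet, detail in FACETS.items():
    print(f"{facet}: {detail['question']}")
```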

A further outcome of the workshop was a practical methodology to help AI project managers and teams manage ethical risks. The methodology comprises three tools: an Ethical AI for Defence Checklist, an Ethical AI Risk Matrix and a Legal and Ethical Assurance Program Plan (LEAPP).
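
For illustration, one entry of such a risk matrix could be modelled as a simple record. The sketch below uses assumed field names and scales (likelihood, severity, mitigation list, owner); the report's actual Ethical AI Risk Matrix format is not reproduced here.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EthicalRiskEntry:
    """One row of a hypothetical Ethical AI Risk Matrix.

    Field names and scales are illustrative assumptions, not the
    format defined in DSTG-TR-3786.
    """
    topic: str             # one of the 20 workshop topics, e.g. "explainability"
    facet: str             # e.g. "Traceability"
    risk_description: str  # what could go wrong
    likelihood: str        # assumed scale, e.g. "rare" .. "almost certain"
    severity: str          # assumed scale, e.g. "minor" .. "catastrophic"
    mitigations: List[str] = field(default_factory=list)
    owner: str = ""        # who is accountable for managing the risk

entry = EthicalRiskEntry(
    topic="explainability",
    facet="Traceability",
    risk_description="Operators cannot interpret model outputs in time",
    likelihood="possible",
    severity="major",
    mitigations=["human-readable rationale logging", "operator training"],
    owner="project AI lead",
)
print(f"{entry.facet}: {entry.risk_description}")
```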

It is important to note that the facets, topics and methods developed are evidence-based results of a single workshop and are not exhaustive of all ethical AI considerations; many more ideas were expressed that may prove valid under further scrutiny and research. Furthermore, A Method for Ethical AI in Defence does not represent the views of the Australian Government. Additional workshops engaging more stakeholders are recommended to further explore appropriate frameworks and methods for using AI ethically within Defence.

Key information

Author

Kate Devitt, Michael Gan, Jason Scholz and Robert Bolia

Publication number

DSTG-TR-3786

Publication type

Technical report

Publication date

February 2021

Classification

Unclassified - public release

Keywords

Ethics, artificial intelligence, artificial intelligence systems, autonomous operations, philosophy