Ethical AI within Defence – is it a Goldilocks problem?

23 June 2022
News

Artificial intelligence (AI) is already one of the biggest catalysts of change in our society. From self-driving cars and autonomous delivery drones to digital assistants and online streaming platforms, AI is changing how people live, work and experience the world.

It is a powerful and disruptive technology that has numerous applications, including many that we have not dreamt of yet. While AI can produce many benefits for people and the world, it also has the potential to cause damage, harm or worse.

The Australian Government and Defence recognise the importance of understanding and shaping the use of AI. In the past three years, numerous government documents have addressed AI. In 2019, the government issued the AI Ethics Framework, which established guidelines for assessing the ethical risks of civilian AI systems. The 2020 Defence Strategic Update and Defence's 2020 Force Structure Plan identified AI as a strategic focus. And in 2021, Defence released A Method for Ethical AI in Defence, a research publication that proposed a practical method for assessing and managing the ethical risks of AI within a defence context.

Building upon this work, Defence researchers, in collaboration with the Department of the Prime Minister and Cabinet and the Australian National University, have, for the first time, applied A Method for Ethical AI in Defence to the type of AI system Defence might operate in the near future. The researchers based their envisioned tactical command and control system on Allied IMPACT, a command and control research testbed developed by Australia, Canada, the USA and the UK to demonstrate the potential for a single human user to control multiple uninhabited vehicles with the assistance of various AI-powered modules. In an earlier trial of Allied IMPACT, one person was able to control 17 uninhabited vehicles. Since each uninhabited vehicle is typically controlled by four people, one human user supported by AI could do the work of around 68 personnel (17 vehicles × 4 operators). This could dramatically change how military personnel are deployed, freeing them up for other activities, which could be extremely valuable for Defence.

As part of this case study, the researchers reviewed the A Method for Ethical AI in Defence documentation, conducted multiple interviews with Defence subject matter experts, watched demonstrations of the technology, and explored and evaluated scenarios in which use of the envisioned tactical command and control system might generate ethical risk.

From applying A Method for Ethical AI in Defence to Allied IMPACT and the envisioned future system, the Defence researchers developed several recommendations and identified many avenues for further research. One theme was that AI systems should prioritise assisting users over raw efficiency. At their most basic level, AI systems such as Allied IMPACT are tools designed to assist human users. However, maximising efficiency is often not the best way to maximise the benefits (and minimise the harms) arising from the way a human will use the tool.

'The most technically advanced solution might not actually be the best solution,' said Dianna Gaetjens, one of the Defence researchers who completed this study. When designing these tools, 'you need to think very carefully about how to best assist human decision-makers.' A key way to ensure this is for AI system developers to consider how humans will use the system from the very start of the development process. 'A common trap is solving a technical problem but not considering the context within which people will be using the technology,' said Dianna. 'You need to have all the people involved in developing an AI system thinking about the socio-technical context right from the word go, because it's very hard to retrofit that later.'

Human users will also need to be able to trust and understand the capabilities of each AI system. As Dianna says, 'where AIs make decisions or recommendations, human users often don't know what information has been used to reach that conclusion or how the decision has been made. But the human is still making decisions based on what the AI tells them – and that potentially puts them at risk.' That's why the report recommends that users of AI receive appropriate training and education to equip them to use the technology in a way that is a bit like Goldilocks – trusting it neither too little nor too much, but just the right amount.

This issue is not restricted to a military context. 'It's also about, for example, HR systems. Some companies already use AI to shortlist candidates. Defence might move towards that in future. How do you make sure that technology is actually effective and being used responsibly and not leading to poor outcomes?'

'AI is very good at certain things that humans are not as good at, and taking advantage of that is obviously desirable. But it's wrong to think that AI is like a human – that the decisions coming out of an AI agent are like human decisions. So we need to ensure that we use AI in a way that minimises the moral, legal and psychological risk to ADF personnel, while also allowing us to uphold our standards and values. Achieving this is the crux of responsible AI, and it's a fascinating but complex and difficult problem that will take years to unravel.'

It is for this reason that the case study recommends Defence ensure there is a suitable accountability framework to clarify when and how human users and managers are held to account for decisions made by or with the support of AI.

This case study also provides recommendations on how to responsibly manage the data that underpins AI systems, and suggestions for improving the resources within A Method for Ethical AI in Defence. The research represents an important first step in applying A Method for Ethical AI in Defence to a Defence AI system, and the lessons learnt could be applied across government. It also forms part of a much larger body of research being undertaken within Defence on AI, command and control, and other scientific and technical activities to acquire, sustain and future-proof Defence capabilities.

For more information, read Case Study: A Method for Ethical AI in Defence Applied to an Envisioned Tactical Command and Control System or A Method for Ethical AI in Defence.