
Where next for autonomy research?

14 October 2020
News
Australian soldiers are exploring the use of robotic and autonomous systems to enhance Army capabilities.

As the prevalence of automation enabled by artificial intelligence (AI) has grown, our familiarity with and reliance on autonomy in our day-to-day lives has increased. Robotic and autonomous systems (RAS) also have an important role to play within the Australian Defence Force (ADF).

The Australian Army released its RAS strategy in 2018; the Royal Australian Air Force has sought to accelerate the use of AI and autonomy through Plan Jericho; and the Royal Australian Navy is pursuing multiple acquisition programs that rely on autonomy.

Autonomy, by definition, is the ability to self-govern. It endows a robotic platform with the smarts to be more than an automated enabler; it gives a robot the ability to become a teammate and work with human operators and other robotic systems. Achieving such a capability would be incredibly beneficial, but its realisation remains elusive.

“The promise of autonomous systems has been discussed for decades, but we are yet to see true autonomy fielded in the ADF,” says Defence scientist Dr Robert Hunjet.

“Teleoperation, the concept of a remote-controlled vehicle, drone or tank, is not representative of autonomy as the vehicle has no ability to make its own decisions or task itself.”

So what is wrong with the autonomous systems that are available today? They aren’t smart enough, they aren’t robust, and they aren’t trusted by operators.

Get smart

A truly smart system should be able to observe and make sense of its environment, and work by itself or with others to achieve goals. To address this, Defence is undertaking research in areas including contextual awareness, active perception, path planning, multi-agent system control and swarming.

Improving a robot’s ability to work intelligently requires more than investment in machine learning. It is also about enabling systems to work together.

“With recent advances in drone technology, the concept of swarming has attracted a lot of interest,” says Dr Hunjet.

“We observe swarming in nature, for example in the way birds flock and fish school. The individuals interact only with others in close proximity, and the cascading effect of these local interactions provides a global behaviour through the swarm as a whole.

“Within robotics, we can emulate the creation of global behaviours in a similar fashion through local interactions with neighbouring systems, offering a potentially scalable approach to generating mass with minimal computational complexity or communications overhead.”
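
The flavour of this idea can be seen in a minimal boids-style sketch, in which each agent reacts only to neighbours within a fixed radius, yet coherent flocking emerges across the group. The weights, radius and agent count below are illustrative values for the sketch, not parameters from any Defence system.

```python
import numpy as np

def flock_step(pos, vel, radius=5.0, dt=0.1,
               w_sep=1.5, w_align=1.0, w_coh=0.8):
    """One boids-style update. pos, vel: (N, 2) arrays of positions/velocities."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        offsets = pos - pos[i]                        # vectors towards every agent
        dist = np.linalg.norm(offsets, axis=1)
        mask = (dist > 0) & (dist < radius)           # local neighbourhood only
        if not mask.any():
            continue
        sep = -(offsets[mask] / dist[mask, None] ** 2).sum(axis=0)  # repel the close
        align = vel[mask].mean(axis=0) - vel[i]                     # match headings
        coh = offsets[mask].mean(axis=0)                            # drift to centroid
        new_vel[i] += dt * (w_sep * sep + w_align * align + w_coh * coh)
    return pos + dt * new_vel, new_vel

rng = np.random.default_rng(0)
pos = rng.uniform(0, 50, size=(30, 2))
vel = rng.normal(0, 1, size=(30, 2))
for _ in range(200):
    pos, vel = flock_step(pos, vel)
```

Note that no agent ever inspects the whole swarm: each update touches only local neighbours, which is what keeps the approach cheap in computation and communications as the swarm grows.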

Building resilience

To be considered robust, autonomous systems must be able to operate in difficult or contested situations, and their algorithms must remain stable in the face of unexpected inputs.

Although this problem is often discussed in terms of image classification and perception, it is far broader than this. Most robotic applications rely on accurate positioning information, so GPS-contested environments, for example, present a significant challenge.

This might be addressed through collaborative positioning. Defence is investigating approaches in which robotic systems share their position and orientation information with others, which then fuse those estimates with their own data, improving positioning accuracy across the fleet.
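
One simple way to picture such fusion is inverse-covariance weighting of the shared estimates, sketched below. This is an illustration under an assumption of independent errors, not a fielded algorithm; a real system would also handle correlated errors, for instance via covariance intersection or a Kalman filter.

```python
import numpy as np

def fuse(estimates):
    """Fuse (position (2,), covariance (2, 2)) estimates by inverse-covariance weighting."""
    info = sum(np.linalg.inv(P) for _, P in estimates)          # combined information
    weighted = sum(np.linalg.inv(P) @ x for x, P in estimates)  # information-weighted sum
    P_fused = np.linalg.inv(info)
    return P_fused @ weighted, P_fused

own = (np.array([10.2, 4.9]), np.diag([4.0, 4.0]))    # own estimate, GPS-degraded
peer1 = (np.array([10.6, 5.3]), np.diag([1.0, 1.0]))  # shared by a teammate
peer2 = (np.array([9.9, 5.1]), np.diag([2.0, 2.0]))   # shared by another teammate

x, P = fuse([own, peer1, peer2])
print(x, np.sqrt(np.diag(P)))  # fused position and its 1-sigma uncertainty
```

The fused covariance is smaller than any single platform's, which is the payoff of sharing: a platform with a poor fix borrows accuracy from better-placed teammates.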

Trust but verify

Building trust in autonomous systems will be critical to the technology’s uptake. So how will humans learn to trust and partner with a machine as they would with their human counterparts?

“Interaction between entities no doubt plays a large part in human trust,” observes Dr Hunjet. “As such, the interface between a human operator and a machine should be designed to assist the human and reduce cognitive load.”

Ongoing research aims to address how AI might explain its decisions to a human operator in a manner that takes the operator's state into account. That is, the machine would seek to provide an appropriate level of detail based on its understanding of the operator's current cognitive load.
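
As a hypothetical sketch of that idea, an interface might select between explanation tiers using an estimate of operator load. The tiers, thresholds and the load estimate below are invented for this example; they are not a Defence design.

```python
EXPLANATIONS = {
    "brief":    "Rerouting: threat ahead.",
    "standard": "Rerouting around sector 4: radar contact assessed hostile.",
    "detailed": ("Rerouting around sector 4: emissions at bearing 042 match a "
                 "hostile radar profile (confidence 0.87); the new path adds "
                 "3 minutes but stays outside the estimated threat envelope."),
}

def explain(operator_load: float) -> str:
    """Pick explanation detail from an operator load estimate in [0, 1]."""
    if operator_load > 0.7:          # heavily loaded: headline only
        return EXPLANATIONS["brief"]
    if operator_load > 0.3:          # moderate load: key reasoning
        return EXPLANATIONS["standard"]
    return EXPLANATIONS["detailed"]  # low load: full rationale

print(explain(0.8))
```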

Trust is also gained through observation of repeated good performance. To ensure that its technology works effectively and as expected, Defence conducts research into verifiable autonomy.

Verification also needs to be considered from a test and evaluation perspective. Because many AI-based systems are specifically designed to learn and evolve, they do not necessarily behave in the same manner when presented with identical inputs, such as sensor information. For such systems, traditional regression-based approaches to testing are not appropriate.
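
One alternative, sketched below under assumed numbers, is a statistical acceptance test: rather than asserting a single exact output, the test runs many trials and requires that performance clears a threshold. The `run_mission` stand-in, trial count and pass rate are illustrative only.

```python
import random

def run_mission(seed: int) -> bool:
    """Placeholder for one simulated trial of the system under test."""
    random.seed(seed)
    return random.random() > 0.05   # pretend roughly 95% of runs succeed

def test_mission_competency(trials: int = 500, required_rate: float = 0.9):
    """Pass if the observed success rate over many trials clears the bar."""
    successes = sum(run_mission(seed) for seed in range(trials))
    rate = successes / trials
    assert rate >= required_rate, f"pass rate {rate:.2%} below {required_rate:.0%}"

test_mission_competency()
```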

Future testing processes may need to be more akin to the issuance of a driver's licence: a curriculum is designed, competency is assessed, and the system is permitted to keep improving while performing the task. This concept is known as 'machine education'.

Partnering for impact

Collaboration is at the heart of Defence’s pursuit of autonomy for future robotic platforms. Defence funds collaboration with Australian academic institutions and international partner organisations through its trusted autonomous systems strategic research initiative.

Defence played a key role in the creation of the Defence Cooperative Research Centre for Trusted Autonomous Systems, and continues to support its efforts to transition autonomy research to industry.

Scientists from Defence are seeking to work with the wider Australian science and technology enterprise to develop autonomous systems that are smart, robust and trusted, allowing them to be deployed within the future ADF.

Autonomous Systems is one of the Defence Focus Areas listed for discussion at the 2020 Defence Human Sciences Symposium (DHSS).