Technical report | The Operationalisation of Agent Transparency and Evidence for Its Impact on Key Human-Autonomy Teaming Variables

Abstract

The growing interest in the use of autonomous systems for both military and commercial applications has been accompanied by a concomitant increase in research involving human-agent interaction. Transparency has been investigated as one factor that could improve human trust in, and appropriate reliance on, autonomous systems. This report reviews studies that have examined how the transparency of an autonomous system affects key variables such as operator performance, response time, situation awareness, perceived usability, and subjective workload. Theoretical frameworks that support transparency in autonomous systems, including Lyons' (2013) models of transparency and the Situation Awareness-Based Agent Transparency (SAT) model (Chen et al., 2014), are also presented. Findings from these studies indicate that a certain amount of transparency improved operator performance; however, too much transparency information could also decrease operator performance. Overall, the findings have not been clear-cut in terms of how much and what type of transparency information should be communicated to the operator. Future research should also examine the underlying mechanisms responsible for these transparency effects.

Executive Summary

The growing interest in the use of autonomous systems for both military and commercial applications has been accompanied by a concomitant increase in research involving human-agent interaction. Transparency has been identified as one factor that could improve human trust in, and appropriate reliance on, autonomous systems (Hancock et al., 2011). This report examines how transparency has been operationalised in the literature, and reviews evidence of the impact of transparency on key human-autonomy teaming variables in order to guide future research.

Transparency refers to an operator’s awareness of an autonomous agent’s actions, decisions, behaviour, and intention (e.g., Chen et al., 2014). Theoretical frameworks such as Lyons’ models of transparency (2013) and the Situation Awareness-Based Agent Transparency (SAT) model (Chen et al., 2014) have been proposed to support transparency, respectively, in human-robot interaction and human-agent teaming. Specifically, these models provide guidance on what information should be communicated to the human to support the interaction between the human and the autonomous system.
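To make this guidance concrete, the sketch below illustrates how the three SAT levels might be represented in an agent's interface: Level 1 information about the agent's current actions and plans, Level 2 information about its reasoning, and Level 3 information about projected outcomes and uncertainty. This is a minimal illustrative sketch in Python; the class and field names are assumptions introduced here for exposition, not part of the published model, which prescribes what should be communicated rather than any concrete data format.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class SATLevel1:
        """What the agent is doing: its current actions, plans, and goals."""
        current_action: str
        plan: List[str] = field(default_factory=list)

    @dataclass
    class SATLevel2:
        """Why the agent is doing it: the reasoning behind its decisions."""
        rationale: str

    @dataclass
    class SATLevel3:
        """What the agent expects: projected outcomes and their uncertainty."""
        predicted_outcome: str
        uncertainty: Optional[float] = None  # e.g., estimated likelihood of success

    @dataclass
    class TransparencyDisplay:
        """Information an interface could present to the operator; higher
        levels may be left unpopulated to realise lower-transparency
        conditions such as those manipulated in the studies reviewed below."""
        level1: SATLevel1
        level2: Optional[SATLevel2] = None
        level3: Optional[SATLevel3] = None

Under this framing, manipulating agent transparency amounts to choosing which of these levels are populated in the operator's display.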

Five studies have manipulated different levels of transparency and investigated their effects on variables such as operator performance, response time, subjective workload, situation awareness, trust, and usability of the system (i.e., Mercado et al., 2015/2016; Selkowitz, Lakhmani, Larios, & Chen, 2016; Stowers et al., 2016; Wright, Chen, Barnes, & Boyce, 2015; Wright, Chen, Barnes, & Hancock, 2016a, 2016b). In general, these studies found that transparency information about an agent's reasoning improved operator performance; however, some studies found that additional transparency information actually worsened operator performance (Wright et al., 2015, 2016b). Transparency did not seem to affect operator response time or subjective workload (Mercado et al., 2015/2016; Selkowitz et al., 2016; Wright et al., 2015, 2016a, 2016b). Of these five studies, only one included a measure of situation awareness; it found that additional transparency information (i.e., predicted outcomes and the agent's reasoning) improved operator situation awareness, but not when uncertainty information was also included (Selkowitz et al., 2016). These findings indicate that providing too much transparency information may overwhelm the operator. In terms of subjective trust and perceived usability, the results have been inconsistent. For example, Mercado et al. (2015/2016) found that subjective trust increased only when uncertainty information was given, whereas Selkowitz et al. (2016) found that the agent's reasoning, but not the additional uncertainty information, increased subjective trust. Finally, while Mercado et al. found that perceived usability scores increased when the agent's reasoning and uncertainty information were given, Stowers et al. (2016) found that the addition of uncertainty information actually lowered participants' perceived usability ratings.

In summary, although operator performance, situation awareness, perceived usability, and trust all seemed to be affected by agent transparency, the results from past studies have not been clear-cut in terms of how much and what type of information should be communicated to the operator. Some of these studies suggest that while higher levels of transparency may improve some human-autonomy teaming variables, the highest transparency level did not always produce the best outcome. Future research should further investigate this issue (i.e., specify the type of information that would benefit the operator in a given context) and examine the underlying mechanisms that could explain these transparency effects. Understanding these psychological processes is fundamental to designing an interface that supports transparency in human-autonomy teaming.

Key information

Author: Adella Bhaskara
Publication number: DST-Group-TR-3413
Publication type: Technical report
Publish date: October 2017
Classification: Unclassified - public release
Keywords: Automation, Autonomous Agents, Human-Machine Interaction