Adversarial Machine Learning for Cyber Security: NGTF Project Scoping Study

Abstract

This report is the result of a scoping study undertaken as part of an Australian Department of Defence Next Generation Technologies Fund (NGTF) project entitled Adversarial Machine Learning for Cyber-Security (AMLC). The report describes the broader context for the project (e.g. attacks and defences against machine learners), outlines general concepts and techniques for adversarial machine learning, and focuses on reinforcement learning algorithms of specific relevance to Defence. A software simulation platform will also be developed to demonstrate the effectiveness of attacks against, and defences of, such machine learning algorithms in a cyber-security context.

Executive Summary

Impactful applications of machine learning (ML) in Defence abound, from cyber-security (e.g. network security operations, malware analysis) to machine reasoning and autonomous systems (e.g. decision-making and platform control systems, computer vision, speech recognition, speaker identification, etc.). Despite these many successes, the very property that makes machine learning desirable, its adaptability, is also a vulnerability that can be exploited by an economic competitor or state-sponsored attacker, potentially resulting in severe degradation of the integrity, security and performance of Defence systems. Attackers aware of the ML techniques being deployed against them can, for example, contaminate the training data to manipulate a learned ML classifier and evade subsequent classification, or can manipulate the specific metadata upon which the ML algorithms make their decisions and exploit identified weaknesses in these algorithms, a practice known as Adversarial Machine Learning (AML). The resilience of learning algorithms is thus a critical component of trustworthy systems in Defence, National Security and society more broadly, but one that is so far poorly understood.
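To make the training-data contamination threat concrete, the following Python sketch illustrates one of the simplest such attacks, label flipping, against a toy classifier. It is purely illustrative and not part of the report or its demonstration platform: the synthetic data, the scikit-learn logistic-regression model and the flip_fraction parameter are assumptions chosen only to show how a modest amount of poisoned training data can degrade a learned classifier.

  # Illustrative sketch only (not from the report): a toy label-flipping
  # poisoning attack, assuming numpy and scikit-learn are available.
  import numpy as np
  from sklearn.datasets import make_classification
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import train_test_split

  rng = np.random.default_rng(0)

  # Synthetic feature vectors with benign (0) / malicious (1) labels.
  X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

  def poison_labels(y, flip_fraction):
      """Flip the labels of a randomly chosen fraction of training points."""
      y_poisoned = y.copy()
      n_flip = int(flip_fraction * len(y))
      idx = rng.choice(len(y), size=n_flip, replace=False)
      y_poisoned[idx] = 1 - y_poisoned[idx]
      return y_poisoned

  # Train on increasingly poisoned data and measure clean test accuracy.
  for flip_fraction in (0.0, 0.1, 0.3):
      clf = LogisticRegression(max_iter=1000)
      clf.fit(X_train, poison_labels(y_train, flip_fraction))
      acc = clf.score(X_test, y_test)
      print(f"poisoned fraction={flip_fraction:.1f}  clean test accuracy={acc:.3f}")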

This report is a scoping study undertaken in the context of an Australian Department of Defence Next Generation Technologies Fund (NGTF) project entitled Adversarial Machine Learning for Cyber-Security (AMLC). The project will investigate concepts, techniques and technologies relating to the security of machine learning algorithms and machine learning systems. The project will focus on a specific class of ML algorithms, namely reinforcement learning (RL) algorithms, which are used in an increasing number of Defence-relevant application settings such as decision-making systems, and in cyber-security settings such as autonomous cyber-security operations (ACO) and malware detection.

The NGTF AMLC project scoping study aims to:

  1. identify novel, deep research problems in adversarial machine learning for cyber-security
  2. ground the research programme design in the existing AML and cyber-security literature, so that future work builds on a firm foundation of existing knowledge
  3. develop relevant AML techniques and technology solutions
  4. ground the AMLC research in practical problems of interest to DST Group and its Defence and National Security clients, through a flexible demonstration platform
  5. detail a plan of work packages, to ensure appropriate long-term project outcomes.

In particular, the AMLC project seeks to deliver: 

  • one or more practical platforms for demonstrating the various adversarial machine learning capabilities 
  • actionable knowledge of how sophisticated adversaries can exert unwanted influence on machine learning systems 
  • new robust machine learners built for adversarial environments.

The AMLC research team consists of members from Cyber and Electronic Warfare Division (CEWD) DST Group, Data61, the University of Melbourne, and Swinburne University.

Key information

Author

Tamas Abraham, Olivier de Vel and Paul Montague, with the AMLC Team at the University of Melbourne, Data61 and Swinburne University

Publication number

DST-Group-GD-0988

Publication type

General document

Publication date

January 2018

Classification

Unclassified - public release

Keywords

Adversarial machine learning, software-defined network, cyber-security, autonomy