
Next Generation Technologies Fund - Cyber

Cyber is a priority theme of the Next Generation Technologies Fund (Next Gen Tech Fund), aimed at realising the potentially game-changing cyber capabilities afforded by research and development in Australia. Defence recognises the need to respond to this technology opportunity, and that technological advances in the cyber domain are likely to lead to the introduction of new capabilities in our region.

Call for Applications

Cyber seeks to leverage the vibrant cyber science, technology and innovation capability across Australia to develop technology solutions of high relevance to Defence. Through partnerships with Data61, academia and industry, Defence aims to understand the potential of cyber technologies, create prototype systems, and demonstrate the practical application of systems to Defence problems. One of the goals of cyber technologies research is to inform Defence of the potential benefits and practical limitations of cyber technologies through studies and demonstrator systems within a three to five-year timeframe.

How you can be involved in the Cyber program

We are seeking submissions from academia and other research agencies telling us how you propose to contribute to the research objectives of the priority projects described within this call. To be considered for support, submissions should:

  1. Demonstrate a strong alignment to one (or more) of the Cyber areas of interest identified;
  2. Contain clearly articulated milestones across a 12-month project period;
  3. Outline how the research contributes to a forward-looking research plan (5+ years); and
  4. Detail the level of financial support required.

Submissions are sought for an initial period of 12 months, with funding tied to the achievement of agreed deliverables and milestones. Projects may be funded in future stages (i.e. beyond 12 months), subject to successful completion of Stage One deliverables and milestones.

Proposals should detail any in-kind contribution from the participant and any proposed co‑funding arrangements. Current and new collaborations with Defence Science and Technology (DST) and/or Data61 are encouraged.

Cyber Technology Project Descriptions

Trustworthy Machine Learning (High priority)

Machine Learning (ML) and Artificial Intelligence (AI) techniques and technologies continue to develop at a rapid pace and have demonstrated remarkable success across a broad range of application areas. In cyber security in particular, there have been numerous applications of ML, alleviating pressure on the bottleneck caused by limited availability of expert human cyber operators and analysts.

However, despite this ongoing success, there are significant challenges in ensuring the trustworthiness of ML systems. Recent work has shown that the use of ML can introduce additional vulnerabilities into a system, arising either from weaknesses in the algorithms themselves (e.g. ML classifiers incorrectly classifying adversarial data with a high degree of confidence) or from the exploitation of weaknesses in the ML system's goals. Such vulnerabilities inevitably lead to a loss of trust in automated and autonomous ML systems. Security is not a focus of traditional ML algorithm development, yet in domains such as cyber security there are incentivised, malicious adversaries willing to game and exploit such vulnerabilities. Beyond robustness, there are broader concerns with the correctness of the predictions made by ML systems. These issues matter not only in cyber security but in numerous other application areas for ML.
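By way of illustration, the sketch below shows the adversarial-example weakness described above in its simplest form. It assumes only NumPy, and the "trained" classifier weights are synthetic stand-ins rather than any real detector: a small, gradient-sign (FGSM-style) perturbation is enough to push a linear malicious/benign classifier to a confident decision on the opposite class.

```python
# Toy illustration of an adversarial perturbation against a linear classifier.
# The weights are synthetic stand-ins for a trained model, not a real detector.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=20), 0.1          # "trained" weights (illustrative only)
x = rng.normal(size=20)                  # a benign-looking input feature vector

def score(v):
    """Probability the classifier assigns to the 'malicious' class."""
    return 1.0 / (1.0 + np.exp(-(w @ v + b)))

# For a linear model the gradient of the score w.r.t. the input is simply w,
# so stepping each feature by epsilon against the current decision is enough
# to flip the prediction, and to do so with high confidence.
epsilon = 0.5
direction = -np.sign(w) if score(x) > 0.5 else np.sign(w)
x_adv = x + epsilon * direction

print(f"original score: {score(x):.3f}  adversarial score: {score(x_adv):.3f}")
print(f"largest per-feature change: {np.max(np.abs(x_adv - x)):.2f}")
```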

We are interested in improving the robustness and resilience both of ML algorithms and of the entire development pipeline of ML solutions. We are further interested in techniques for quantifying the correctness, robustness or resilience of ML systems, and/or for producing additional outputs that attest to the correctness of their predictions. We are therefore seeking proposals that enable significant advances in the science of increasing the robustness, and of quantifying and enhancing the trustworthiness, of ML concepts, techniques and technologies, with a particular focus on applicability to cyber security. Any proposal must consider the impact of the suggested approaches on the performance (speed, precision, recall, etc.) of the ML systems and the possible trade-offs involved.

Symbolic Execution for Rapid Threat Analysis (High priority)

Automated analysis of software binaries through techniques such as symbolic execution has shown potential to be a game-changing technology for computer security. However, current work in this area focuses on vulnerability or bug detection/discovery and is computationally expensive (requiring significant computing resources and/or time for analysis).

A promising new application of symbolic execution is rapid threat analysis: understanding the behaviour of unknown software discovered by incident response teams, and identifying potentially malicious code and its consequences. In a military incident response context, knowledge of the effects of malicious software on the wider system is key to determining remedial actions that preserve the ability to fight through.

This requires a new methodology for symbolic and concolic analysis in which a key goal of the process is to accurately model the effect of a binary on the wider system, together with analysis techniques that support this methodology. Importantly, applying symbolic execution as part of incident response requires approaches that can deliver useful results in as little as a few minutes, in a computationally efficient manner. This will require new approaches to prioritising the exploration of execution paths during symbolic execution, and the development of alternatives or improvements to constraint solvers.
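As a deliberately simplified illustration of the constraint-solving step at the heart of symbolic execution, the sketch below (assuming the open-source z3-solver Python package; the branch condition is invented) turns a single branch into a path constraint and asks the solver for a concrete input that reaches the guarded code.

```python
# Minimal illustration of deriving an input that drives execution down a chosen
# path by solving the path constraint; the checked condition is invented.
from z3 import BitVec, Solver, sat

x = BitVec("x", 32)            # symbolic 32-bit program input

solver = Solver()
# Path constraint for a hypothetical branch such as:
#   if ((x ^ 0xBEEF) * 3 == 2022) { trigger_payload(); }
solver.add((x ^ 0xBEEF) * 3 == 2022)

if solver.check() == sat:
    print("concrete input reaching the payload branch:", solver.model()[x])
else:
    print("the payload branch is unreachable on this path")
```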

The long-term goal of this research program is to develop symbolic execution techniques that enable portable and practical tools for use in responding to previously unknown cyberattacks as they occur.

Formal Verification of Network Control Protocols (High priority)

The research will investigate formal methods to model and verify the dynamic processes of network control protocols in order to discover potential vulnerabilities. The analysis should cover qualitative properties (e.g. absence of bugs) as well as quantitative properties (e.g. timing and probabilistic behaviour) of the protocols.

This project should focus on formal analysis of the protocol designs and specifications rather than vendor-specific implementations. There are numerous network control protocols that could be formally verified. Specific interest lies with access network, IP routing and transport protocols, including Open Shortest Path First (OSPF), north/southbound interfaces in SDN such as the OpenFlow protocol, and 4G/5G core mobile telecommunications protocols such as Diameter.
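To indicate the flavour of analysis sought (qualitative properties checked exhaustively over a model of the protocol design rather than over an implementation), the sketch below explores every reachable state of a deliberately tiny, invented distance-vector scenario and reports a state violating a "no forwarding loop" invariant. It is far simpler than the OSPF, OpenFlow or Diameter analyses of interest and is offered only as a shape for the work.

```python
# Minimal explicit-state exploration of a toy route-advertisement model.
# The routers, messages and failure event are illustrative, not any real protocol.
from collections import deque

# State: (route_A, route_B, link_BD_up, msgs) where route_X is the next hop X
# uses to reach destination D, and msgs is a frozenset of in-flight adverts.
INITIAL = ("B", "D", True, frozenset())

def successors(state):
    route_a, route_b, link_up, msgs = state
    if link_up:                                        # 1. the B-D link may fail
        yield (route_a, None, False, msgs)
    if route_a is not None:                            # 2. A advertises its route to D
        yield (route_a, route_b, link_up, msgs | {("A", True)})
    for msg in msgs:                                   # 3. B accepts A's advertisement
        if msg == ("A", True) and route_b is None:
            yield (route_a, "A", link_up, msgs - {msg})

def invariant(state):
    route_a, route_b, _, _ = state
    return not (route_a == "B" and route_b == "A")     # no forwarding loop

def explore(initial):
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            return state                               # counterexample found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None

print("counterexample:", explore(INITIAL))
```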

Data Security and Privacy of Inference Models (High priority)

Advances in artificial intelligence and machine learning are enabling extensive capabilities for extracting analytics and insights, driving an unprecedented rise of the data-driven economy with applications in sectors ranging from smart manufacturing and smart transportation to predictive maintenance and precision healthcare. This, however, raises several privacy and security concerns.

This research will focus on identifying and mitigating data privacy issues relating to AI and machine learning inference models. In particular, the project aims to identify and quantify the information leakage that occurs when inference models process data features and extract insights from personal data. One special area of interest is understanding the threats posed by differentiating attacks, in which adversaries hold different versions of an AI inference model; for example, a cloud provider may hold different versions of an analytics model after end-users have opted out of some analytics services. The project will also explore the intersection between privacy risks and malicious activity that exploits personal information, such as spear phishing.
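As a simple illustration of the kind of leakage in question, the sketch below (assuming scikit-learn and purely synthetic data) compares the predictions of two versions of a model, trained before and after a single record is removed, on that same record; the shift in output is itself a signal about the removed individual.

```python
# Toy illustration of leakage from holding two versions of an inference model:
# one trained with a target record, one trained after that record opted out.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 5))                                 # synthetic personal data
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=40) > 0).astype(int)

target = 0                                                   # the person who opts out
keep = np.arange(len(X)) != target

model_with = LogisticRegression().fit(X, y)                  # model before opt-out
model_without = LogisticRegression().fit(X[keep], y[keep])   # model after opt-out

p_with = model_with.predict_proba(X[target:target + 1])[0, 1]
p_without = model_without.predict_proba(X[target:target + 1])[0, 1]
print(f"score with the record: {p_with:.4f}")
print(f"score without it:      {p_without:.4f}")
print(f"difference (leakage signal about the target): {abs(p_with - p_without):.4f}")
```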

Detecting and Analysing Vulnerabilities in Concurrent Software (High priority)

Concurrent software is software that runs more than one thread simultaneously. Today, virtually all software applications are concurrent. Toby Murray (2018 IEEE Euro S&P) presented the first general-purpose logic for proving information flow control security of shared-memory concurrent programs. This lays a foundation for developing provably sound program analysis techniques for detecting and analysing vulnerabilities in concurrent software.

The logic, however, assumes that the software is data-race free and is therefore not affected by code optimisations made by the compiler or the multicore hardware. Furthermore, the logic assumes nondeterministic scheduling of threads by the operating system. These assumptions do not always hold. For example, programs may not be data-race free in order to improve efficiency (so-called non-blocking algorithms are often used at the operating-system level) or simply due to poor design. Detecting vulnerabilities in such cases requires tools built on a logic that takes compiler and hardware optimisations into account.
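To ground the discussion, the following self-contained sketch (plain Python threads; the shared counter is purely illustrative) shows the simplest concurrency defect of all: a data race in which an unsynchronised read-modify-write loses updates depending on how the scheduler interleaves the threads.

```python
# Toy data race: several threads perform an unsynchronised read-modify-write
# on shared state, so updates are lost depending on the interleaving.
import threading

counter = 0

def worker(iterations):
    global counter
    for _ in range(iterations):
        current = counter          # read the shared value
        counter = current + 1      # write it back; another thread may have run in between

threads = [threading.Thread(target=worker, args=(200_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"expected {4 * 200_000}, got {counter}")   # typically lower: lost updates
```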

Graeme Smith (2018 International Symposium on Formal Methods) has provided an operational semantics of program behaviour which takes into account optimisations made by hardware architectures including x86-TSO (Intel and AMD), ARM (ARM) and POWER (IBM). The long-term objective is to extend the foundations provided by these logics to cover all effects of hardware, operating systems and compilation processes on information flow analysis, to develop automatic program analysis tools that are built on them, and to integrate those tools within a secure software development process.

A short-term project would be to combine Smith's semantics with the logic of Murray to provide a basis for program analysis tools that work not only with data-race free concurrent programs but also with those relying on low-level non-blocking code, or with design errors leading to data races. Furthermore, semantics would be developed capturing the optimisations allowed by compilers (e.g. as defined for C and C++ by the C11 memory model), to similarly provide a basis for analysing higher-level source code.

Resilient Cyber Systems (High priority)

Military platforms and systems are underpinned by core cyber technologies, including software, hardware, embedded firmware, sensors, and user interfaces. The supply chain for developing and supporting cyber systems cannot guarantee the requisite security for mission-critical components.

Specifically, we are interested in how commercial-off-the-shelf (COTS) components of varying levels of trust can be used to construct trusted and trustworthy systems. Research should focus on architectural aspects whereby increased security can be gained through the addition of small amounts of bespoke logic (software or hardware). The simplicity of the architecture, the trusted computing base, and the usability will need to be balanced against the overall cost and efficiency of any proposed research direction.

Proposals are welcomed for collaborative research projects in tools and techniques for ensuring that the technologies underpinning Defence systems are trustworthy. Preference is for proposals that exhibit a long-term vision, coupled with tangible milestone outputs in the first 12 months. Tangible short-term outputs that can provide mechanisms to support a high-assurance platform for the secure seL4 microkernel are encouraged.

Depicting Human Vulnerabilities Towards Cyber Threats via Trust Analytics (High priority)

Although most efforts in cyber security focus on purely technical solutions, a significant proportion of attacks and confidential data spills can be attributed, at least in part, to human error and negligence. Understanding these vulnerabilities through human behaviour will be a critical step towards developing mitigation strategies, and hence towards improving the cyber security posture of stakeholders of significance to national security.

This project should aim to investigate which individual differences and which patterns of user behaviour can predict people's vulnerability, directly and indirectly, to cyber threats. Specifically, it should focus on more robust and accurate measurement of trust in different cyber contexts, based on generic trust profiles, observable human behaviour, captured physiological signals, and means of interaction. The research could consist of two key components: (1) experiment design for the elicitation of trust profiles, including the development of research hypotheses and the collection of behavioural (e.g. mouse and eye movement) and/or physiological data (e.g. GSR, BVP) for human trust measurement and calibration; and (2) evaluation of human trust and responses towards cyber threats under different network contexts. These could be validated in lab-based or crowd-sourced user studies involving diverse groups of humans, taking different roles and adopting different means of human-network interaction.

This research should link several key factors, including the human, trust and cyber security, together into a framework. Such a framework is critical to understanding human trusting behaviours and to deploying trust methods in cyber scenarios and applications, and will further provide effective knowledge on how to calibrate human trust, identify human vulnerabilities and avoid human errors in the cyber environment.

Privacy-Preserving Distributed Edge Computing (Medium priority)

The highly distributed nature of the Internet of Things (IoT), along with the sheer scale of mobile and ubiquitous computing, poses significant challenges in providing timely processing and exchange of large amounts of data and in ensuring security and privacy when collecting and processing that data. Presently, the implementation of analytics extraction models largely resides within vast cloud infrastructures. However, edge computing, which pushes computation towards the data source (the IoT or mobile device), is a promising approach for simultaneously addressing security, privacy and efficacy challenges. There are, however, additional modelling and design challenges in implementing such privacy-preserving edge computing effectively and at scale for many real-world scenarios. This research will focus on exploring approaches to optimise the privacy and utility of analytics extraction at the end-user or the network edge, addressing system designs that offer privacy and security to users, personalised AI-based products with low network delay, and large-scale data collection and exchange.
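By way of illustration, the following sketch (standard library only; the population numbers are synthetic) shows one building block relevant to on-device privacy at the edge: randomised response, in which each device perturbs its own data locally while the aggregator can still recover an unbiased population-level estimate.

```python
# Randomised response: each device reports a locally perturbed bit; the
# aggregator debiases the noisy reports to estimate the true proportion.
import random

def local_report(true_bit, p_truth=0.75):
    """Each device answers truthfully with probability p_truth, else flips its bit."""
    return true_bit if random.random() < p_truth else 1 - true_bit

def aggregate(reports, p_truth=0.75):
    """Debias the noisy reports to estimate the true proportion of 1s."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth)) / (2 * p_truth - 1)

population = [1] * 3000 + [0] * 7000        # ground truth: 30% have the attribute
reports = [local_report(bit) for bit in population]
print(f"estimated proportion: {aggregate(reports):.3f}")   # close to 0.30
```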

Policy-Defined Networking (Medium priority)

Software Defined Networks (SDN) offer a unique, agile platform for measuring, monitoring and empowering distributed networks with control capabilities over the transported traffic. Such an architecture, however, requires the ability to manage multiple, dynamically changing policies for network access, network security and traffic prioritisation, as well as to resolve potentially conflicting policies. This research will focus on using SDN technology to formally verify and enforce that distributed systems apply and conform to a set of policies. This work has application to Defence platforms where there is a need to address cyber worthiness, in particular where there are numerous integrated cyber capabilities.
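As a deliberately simplified illustration of the policy-conflict problem, the sketch below (standard library only; the rule format and example rules are invented) flags pairs of flow policies whose match spaces overlap but whose actions disagree, the kind of check a formal policy-verification layer above an SDN controller would need to perform exhaustively.

```python
# Toy conflict detection over simplified flow policies (source prefix,
# destination port, action): overlapping matches with different actions conflict.
from ipaddress import ip_network
from itertools import combinations

rules = [
    {"name": "allow-web",  "src": ip_network("10.0.0.0/16"),    "dport": 443, "action": "forward"},
    {"name": "block-lab",  "src": ip_network("10.0.5.0/24"),    "dport": 443, "action": "drop"},
    {"name": "allow-mgmt", "src": ip_network("192.168.0.0/24"), "dport": 22,  "action": "forward"},
]

def overlaps(a, b):
    """Two rules match some common traffic if their prefixes overlap and ports agree."""
    return a["src"].overlaps(b["src"]) and a["dport"] == b["dport"]

for a, b in combinations(rules, 2):
    if overlaps(a, b) and a["action"] != b["action"]:
        print(f"conflict: {a['name']} vs {b['name']} on overlapping traffic")
```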

SDN Data Plane Security and extensions to Software Defined Clouds (Medium priority)

Software-Defined Networking (SDN) radically changes the network architecture by decoupling the network logic from the underlying forwarding devices. From a security perspective, SDN separates security concerns into the control and data planes, where the data plane is composed of networking equipment such as switches and routers specialised in packet forwarding, interacting with SDN controllers via the southbound APIs. This architectural re-composition brings up exciting opportunities and challenges. The overall perception is that SDN capabilities will ultimately result in improved security. However, in its raw form, SDN could potentially make networks more vulnerable to attacks and harder to protect.

This research will focus on identifying the challenges faced in securing the data plane of SDN (one of the least explored but most critical components of this technology) by formalising potential attack scenarios. For instance, it has recently been shown that attackers can use performance metrics (input buffer and/or packet processing time) as side channels to infer forwarding policies. Other vulnerabilities include the exploitation of software flaws (e.g. TCAM memory attacks) to compromise switches or to infer network topologies, and protocol attacks that exploit network protocol vulnerabilities to craft "fake" flow rules that override existing rules.

In addition, with the increasing adoption of the software-defined systems paradigm, which abstracts the actual hardware at different layers with software components, emerging technologies such as the Software Defined Cloud (SDCloud) for cloud management are equally vulnerable to threats from compromised forwarding devices. This project therefore aims to establish the set of requirements needed to protect the data plane of software-defined systems in a holistic and generic manner.

Formal Mathematical Modelling Environment (Medium priority)

Defence is rife with systems that have critical properties which must be satisfied to ensure safety, security or mission-suitability. A regime of accreditation as 'fit for purpose' has been developed which incorporates evaluation intended to be commensurate with the level of assurance required. It is widely understood that the highest levels of assurance require a formal mathematical analysis of the system and its properties. However, this can be quite difficult to achieve in practice: current "proof-assistant" technology does not seem to have the appropriate structures to flexibly support large-scale mathematics, let alone engineering. The underlying problem seems to be a lack of flexibility in language support at the core of the tools, leading to an inability to support the typical elision and abstraction in engineering developments (of time, uncertainty, probability) in the consequent computational models.

We are seeking collaborative research proposals that can guide and shape the development of the next generation of formal methods tools capable of attacking such large-scale problems as arise in verifying safety and security in Defence systems. The development of assurance for large scale problems will need to leverage engineering notions such as modularisation, composition/refinement and re-use, and machine support is essential to deal consistently with the complexities that arise.  We are interested in applying such ideas at the most fundamental levels: starting with the language underlying the tool, computational/reasoning structures, and the ability to translate between them. 

One project of current interest is to utilise theory morphisms to ensure consistency for system-modelling development in a rich language such as Isabelle/ZF which, for particular system instances, throws proof obligations to a tightly-controlled context within Isabelle/HOL. In this way it is envisaged that a complete system design and implementation model can be developed in HOL, which has a well-structured and easily understood presentation at the ZF level.

Preference is for proposals that exhibit a long-term vision, coupled with tangible milestone outputs in the first 12 months. 

FPGA Security (Medium priority)

Defence systems, whether bespoke, military off the shelf, or commercial off the shelf, contain many FPGAs. Software-defined radios and radars, complex processing devices, sensor systems, and weapons systems all contain many commercial or military grade FPGAs. The F-35 fighter jet alone has well over 200 FPGAs within its electronic systems.

These devices need to operate securely, i.e. not be susceptible to logical or physical attacks, and to maintain their intended functionality. Further, the ability for these devices to be programmed and operated in a secure manner is also important.

We seek research focused on supporting the development of an ecosystem of tools and techniques to ensure correct operation of our FPGA platforms from design through to deployment. Combining extant best practices with research into resilience, robustness, and security of design, areas of interest include:

  • Heterogeneity and replication in both design tools and physical FPGAs
  • Self-checking FPGA designs
  • Abstract FPGA platforms – FPGA abstraction layer or security overlays
  • Protection of FPGA bit-stream and programming
  • Multi-domain analysis (e.g., inter fabric side channel)
  • Hybrid Analysis of FPGA-based hardware/software systems
  • FPGA/soft core processor resilience, voting, redundancy and circuit garbling techniques

Preference is for proposals that exhibit a long-term vision, coupled with tangible milestone outputs in the first 12 months.

Assisted System Decomposition for Vulnerability Assessment (Medium priority)

We are seeking tools and techniques to increase the cyber resilience of COTS/GOTS equipment used by the Australian Defence Force. The ubiquity and complexity of ICT cyber systems, combined with an increased reliance on third-party and closed systems, has resulted in a growing requirement for detailed cyber vulnerability assessments.

Our research aims to mitigate this complexity through systemic analysis. A large component of this problem is being able to understand how a system operates and to adequately map its ICT components and their interactions. Research is required to develop automated techniques for performing logical system decomposition. The goal is to gather open-source and derived information on systems of interest and propose a means of decomposing those systems. An example is understanding the operation of a modern commercial-off-the-shelf radio system. The system decomposition should expose the low-level information processing (CPUs, DSPs, FPGAs), storage (RAM, flash, disk, network), and communication (protocols, networks) mechanisms and their interconnectivity, from which other tools can then suggest specific vulnerability analysis methods.

Preference is for proposals that exhibit a long-term vision, coupled with tangible milestone outputs in the first 12 months.

Cyber-Enabled Information Warfare (Medium priority)

In recent communications, Herbert Lin and Jackie Kerr have proposed and discussed the concept of cyber-enabled information/influence warfare and manipulation (IIWAM). They define IIWAM as "the deliberate use of information against an adversary to confuse, mislead and sometimes influence the choices and decisions the adversary makes", and explain how cyber-enabled IIWAM exploits modern communication technologies, the information environment and our cognitive and emotional responses to its advantage.

Information manipulation is not a new phenomenon. It has been studied extensively, in particular in relation to the production of deceptive messages and the use of deceptive communication strategies for personal, ideological, political and commercial purposes. Although communication is founded on the presumption of 'truth', and interactions are based on assumptions people make about the quantity, quality, manner and relevance of the content communicated to them, it is not unusual for communicators to alter (e.g. overstate, minimise), misrepresent, conceal or fabricate information and facts. What distinguishes IIWAM from traditional 'propaganda' and deception techniques is the scale at which it has been deployed, the targets it seeks to influence and the impact it has had on the information environment in recent years. The development of computational approaches and tools to detect and deflect IIWAM operations is needed.

We are seeking submissions related, but not limited, to:

  1. The automatic detection of deceptive messages (including fake news). Approaches based on the Cooperative Principle have been proposed, offering a typology of verbal deceptions. These approaches focus on the message itself (i.e. what is said).
  2. The systematic identification and characterisation of deceptive communication strategies and what motivates them (e.g. inject fear, promote anger, divide communities, swing opinions). Approaches have focused on the speaker's intention and the means used (i.e. assumptions, beliefs and goals) to pursue deceptive communication behaviours.
  3. The development of techniques and approaches to information manipulation and deceptive behaviour attribution. Modern means of communication make it difficult to trace deceptive behaviour back to its originators. People can produce information content, adopt stances and express feelings indirectly or anonymously. Whereas originators and authors can sometimes be clearly identified, attribution can be obfuscated or purposefully misleading.
  4. The identification of direct and indirect targets of IIWAM as well as the means used to reach the targeted audience (i.e. physical, informational and psychological). William Biddle, in his psychological analysis of propaganda, was one of the first to emphasise how emotions can drive individuals to follow particular behaviours.
  5. The development of techniques and approaches to countering IIWAM by preventing, detecting and responding to deception and confusion methods.

Proposals will need to clearly articulate the framework adopted, and the data analysed and used to validate the approaches and models.

Selection Process

To be considered, all proposals must be accompanied by a completed covering sheet, using the template provided.

Proposals are to be submitted by 4:30pm Australian Eastern Daylight Time (AEDT) on 6 August 2018. Only proposals submitted via email by the above deadline will be considered in this round.

Proposals submitted will be assessed equally against the following criteria:

  • Alignment to Defence strategy and the project priorities articulated in this document
  • Future science criticality
  • Collaboration depth (e.g. collaboration with DST staff, Data61 staff, other universities, an industry partner, etc.)
  • Delivery of outcomes (e.g. the ability of the proposal to deliver the agreed outcomes and milestones)
  • Game-changing potential for Defence

Please limit submissions to no more than 2000 words. Ensure that all contact details, and the details of current and potential DST or Data61 collaborators and/or research partners, are on a separate page/covering sheet. Proposals will be de-identified during the selection process to eliminate any potential conflicts of interest.

Defence and Data61 reserve the right to fund all, some or none of the proposals received under this Call for Applications.

Conditions of Award

  • This opportunity is open to all registered Australian Universities and Australian Publicly Funded Research Agencies.
  • Successful applicants must be able to meet the milestones and timelines outlined in their submission.
  • Successful applicants must enter into a Data61 University Collaboration Agreement.
  • Successful applicants will enter into the appropriate contracting arrangement within 3 weeks of announcement.

Contracting

Successful applicants will be required to enter into a Data61 University Collaboration Agreement and a subsidiary Collaborative Research Project Agreement with Data61 in order to access project funding. Data61 will enter into contracts with the lead party in each proposal.

Any IP generated as part of the projects will vest in Data61 unless otherwise agreed, and Defence will receive a licence for Commonwealth purposes only.

Any Commonwealth funding contributed to the projects will be paid in accordance with the successful completion of milestones, as negotiated by the parties. Where circumstances necessitate, a small payment may be made upon execution of the agreement, in accordance with Defence procurement rules.

Further Information

For further information or assistance, please contact:

Key Dates

Call for Applications: 11 July 2018
Applications Open: 11 July 2018
Applications Close: 4:30pm AEDT, 6 August 2018
