Network security and analytics

Key challenges for modern cyber defence are the growth in attacks, the volume of monitoring data from networks, servers and firewalls, and a shortage of skilled staff able to detect these attacks within that data. These challenges are driving demand for better machine learning techniques for cyber security – an emerging field known as security analytics.

The University of Melbourne has two decades of experience in developing scalable and robust machine learning techniques to detect a wide variety of cyber attacks. Our expertise spans fundamental research into new machine learning techniques for attack detection from diverse sources of data, as well as a deep understanding of the underlying properties of attacks that can aid in their detection. In particular, we have extensive experience in deploying security analytics in a range of practical environments, including high-speed backbone networks, different types of access networks, and resource-constrained devices in industrial control systems and the Internet of Things (IoT).

Researchers associated with the group focus on:

Anomaly detection and contrast mining

Our group has developed a range of deep learning techniques for detecting new and unusual events. This includes detecting unusual changes in network traffic, as well as characterising emerging changes that may be symptomatic of malicious activity.

Example paper: S Erfani, S Rajasegarar, S Karunasekera, C Leckie, High-dimensional and large-scale anomaly detection using a linear one-class SVM with deep learning, Pattern Recognition 2016 (winner of Pattern Recognition Journal Best Paper Award).
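As a rough illustration of the general idea (not the method in the paper above), the sketch below fits a one-class SVM to feature vectors of normal network flows and flags flows that deviate from that model. The features, parameters and synthetic data are assumptions made purely for illustration.

```python
# Minimal sketch: one-class SVM anomaly detection over network-flow features.
# Feature choice and parameters are illustrative assumptions only.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Hypothetical flow features: [bytes sent, packets sent, mean inter-arrival time]
normal_flows = rng.normal(loc=[5000, 40, 0.1], scale=[500, 5, 0.02], size=(1000, 3))
suspect_flows = rng.normal(loc=[50000, 400, 0.01], scale=[5000, 50, 0.005], size=(10, 3))

scaler = StandardScaler().fit(normal_flows)
model = OneClassSVM(kernel="linear", nu=0.01).fit(scaler.transform(normal_flows))

# +1 = consistent with the learned model of normal traffic, -1 = anomalous
print(model.predict(scaler.transform(suspect_flows)))
```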

Adversarial machine learning

Increasingly sophisticated attackers know they are being monitored, and can modify their behaviour to poison the training of machine learning methods for attack detection. Our group is developing techniques to identify these adversarial attacks on machine learning systems.

Example paper: Y Han, B Rubinstein, T Abraham, T Alpcan, O De Vel, S Erfani, D Hubczenko, C Leckie, P Montague, Reinforcement Learning for Autonomous Defence in Software-Defined Networking, GameSec 2018.
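To illustrate the poisoning threat described above (not the reinforcement-learning setting studied in the paper), the sketch below shows how flipping a fraction of training labels degrades a simple detector. The data, detector and attack budget are synthetic assumptions.

```python
# Sketch of a label-flipping poisoning attack on a simple detector.
# Synthetic data; illustrates the threat model only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # 1 = attack, 0 = benign
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

clean = LogisticRegression().fit(X_tr, y_tr)

# The adversary flips the labels of 30% of the training points it controls.
y_poisoned = y_tr.copy()
idx = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]
poisoned = LogisticRegression().fit(X_tr, y_poisoned)

print("clean accuracy:   ", clean.score(X_te, y_te))
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```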

Privacy-preserving machine learning

To preserve the privacy of users’ data, there is a growing need for machine learning methods that can analyse encrypted data to detect specific patterns of interest. In this way, important trends can be identified from shared data without decrypting any individual user’s data.

Example paper: Z Zhang, B Rubinstein, C Dimitrakakis, On the differential privacy of Bayesian inference, AAAI 2016.
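The sketch below shows one standard privacy-preserving building block, the Laplace mechanism for releasing a differentially private aggregate count. It is not the Bayesian-inference analysis in the paper above, nor computation over encrypted data; the query, data and epsilon value are illustrative assumptions.

```python
# Sketch of the Laplace mechanism: release an aggregate count with
# differential privacy. Query and epsilon are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

def private_count(values, threshold, epsilon):
    """Differentially private count of values above a threshold.

    The count has sensitivity 1 (adding or removing one user changes it
    by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = int(np.sum(values > threshold))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical per-user alert counts shared by 10,000 users.
alerts = rng.poisson(lam=3, size=10_000)
print("true count:", int(np.sum(alerts > 5)))
print("private count (eps=0.5):", round(private_count(alerts, 5, epsilon=0.5), 1))
```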

Key research projects and contracts

Adversarial Machine Learning for Cyber

This project is in collaboration with DST Group and Data61, with the support of Defence's Next Generation Technologies Fund. We are studying the effects of adversarial attacks on reinforcement learning systems that protect complex networks, and developing robust learning methods that are resilient to such attacks.

Detecting IoT Botnets in Smart Homes

This project is in collaboration with the Oceania Cyber Security Centre. We address the challenge of how an ISP can detect rogue IoT devices within home networks while processing the large volumes of traffic involved. We are developing scalable anomaly detection techniques that automatically model normal device behaviour and flag the abnormal behaviour of infected IoT devices.
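A minimal sketch of this kind of unsupervised pipeline (not the project's actual system): learn a model of a device's normal traffic summaries and flag deviations from it. The per-device features, parameters and synthetic data are assumptions for illustration.

```python
# Sketch: unsupervised anomaly detection over per-device traffic summaries.
# Features and parameters are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)

# Hypothetical per-minute summaries for one IoT device:
# [packets out, distinct destination IPs, mean packet size]
normal = rng.normal(loc=[30, 2, 200], scale=[5, 1, 20], size=(5000, 3))
model = IsolationForest(contamination=0.01, random_state=3).fit(normal)

# A scanning- or DDoS-like burst: many packets to many destinations.
burst = np.array([[900, 120, 60]])
print(model.predict(burst))   # -1 indicates an anomaly
```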

Enquiries