Research

We work with a wide range of industries, from cyber security and defence to finance and smart cities.

The Lab provides a set of practical testbeds for evaluating AI algorithms, particularly machine learning algorithms. This evaluation includes assessing the appropriateness of training datasets; the robustness of AI algorithms to varying or unexpected inputs; the privacy levels of AI models; the resilience of AI systems to adversarial attacks; and the competency of AI algorithms to handle new scenarios.

Facilities

We provide a set of practical testbeds for evaluating Machine Learning (ML) models:

  • Explanation testbed: generating human-interpretable explanations of ML models.
  • Privacy testbed: generating examples to test the resistance of ML models to attacks that extract private information about the training data encoded in the model.
  • Adversarial testbed: generating adversarial examples to test the resistance of ML models to attacks or distribution shifts.
  • Variation testbed: generating examples to test the robustness of ML models to new types of inputs (see the sketch after this list).
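
To make the testbed idea concrete, below is a minimal sketch of a variation-style check: a simple classifier is trained on synthetic data, and its accuracy is re-measured as increasing amounts of Gaussian noise are added to the held-out inputs. The dataset, model and noise levels are illustrative assumptions, not the Lab’s actual test suite.

```python
# A minimal sketch of a variation-style robustness check (illustrative only).
# Assumptions: a synthetic dataset, a logistic-regression model and arbitrary
# Gaussian noise levels stand in for a real testbed configuration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

rng = np.random.default_rng(0)
for sigma in [0.0, 0.1, 0.5, 1.0]:
    # Perturb the held-out inputs and re-evaluate the trained model.
    X_noisy = X_test + rng.normal(scale=sigma, size=X_test.shape)
    acc = accuracy_score(y_test, model.predict(X_noisy))
    print(f"noise sigma={sigma:.1f}  accuracy={acc:.3f}")
```

A sharp drop in accuracy at small noise levels is one signal that a model may not generalise to inputs that differ slightly from its training distribution.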

The Lab’s capability is driven by approximately 40 academic staff at the University of Melbourne who work directly in AI and machine learning, or at their interface with areas of computing such as software engineering, business processes, human-computer interaction and cyber security.

Explainable AI

As AI has become more ubiquitous, complex and consequential, the need for people to understand how decisions are made, and to judge their correctness, fairness and transparency, has become increasingly crucial because of concerns about ethics and trust.

The field of Explainable AI (XAI) aims to address this problem by designing intelligent agents whose decisions, and the actions leading up to those decisions, can be easily understood by humans.

Truly explainable AI requires integrating its technical and human challenges. To make progress, we need a sophisticated understanding of what constitutes “explanation” in an AI context; one that is not merely limited to an articulation of the software processes, but explicitly considers that it is ultimately people who will consume these explanations. Our laboratory’s research at the University of Melbourne aims to build explainable AI from these very principles.

We aim to take a human-centred approach to XAI, explicitly studying which questions people want explanations to answer, what makes a good explanation for a person, how explanations and causes can be extracted from complex and often opaque decision-making models, and how they can be communicated to people.
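
As one concrete illustration of extracting an explanation from an otherwise opaque model, the sketch below uses permutation feature importance: a simple, model-agnostic technique that measures how much held-out accuracy drops when each input feature is shuffled. This is an illustrative assumption rather than the specific method used in our research.

```python
# A minimal, model-agnostic explanation sketch using permutation importance.
# Assumptions: a public toy dataset and a random forest stand in for the
# complex, opaque models discussed above.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the average drop in held-out accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```

The ranked features give a person a starting point for asking why the model behaves as it does, which is only one narrow facet of what makes an explanation useful to people.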

This research effort is a collaborative project involving computer science, cognitive science, social psychology, and human-computer interaction, treating explainable AI as an interaction between person and machine.

Privacy-enhancing technology

We explore technical solutions that enable data analysis with strong security and privacy guarantees – critical when running machine learning or database queries over sensitive data. Our research areas include differential privacy for strong privacy guarantees when releasing the output of data analysis (eg, statistics about the data or a machine learning model trained on it), confidential data processing (eg, relying on cryptographic techniques and Trusted Execution Environments), privacy risk analysis, and the questions of privacy, security and verification that arise when computing over multi-party data.
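
As a small illustration of one building block of differential privacy, the sketch below releases a count using the Laplace mechanism, with noise calibrated to the query’s sensitivity and a privacy budget epsilon. The toy dataset, predicate and epsilon value are assumptions for illustration only, not a description of any of the engagements mentioned above.

```python
# A minimal sketch of the Laplace mechanism for a differentially private count.
# Assumptions: a toy list of ages, an arbitrary predicate and epsilon = 0.5.
import numpy as np

def dp_count(values, predicate, epsilon, rng):
    """Release a differentially private count of records satisfying `predicate`."""
    true_count = sum(1 for v in values if predicate(v))
    sensitivity = 1.0  # adding or removing one record changes a count by at most 1
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

rng = np.random.default_rng(0)
ages = [23, 35, 41, 58, 62, 29, 47, 71]  # toy sensitive dataset
noisy = dp_count(ages, lambda a: a >= 40, epsilon=0.5, rng=rng)
print(f"noisy count of records with age >= 40: {noisy:.2f}")
```

Smaller values of epsilon give stronger privacy guarantees at the cost of noisier answers.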

We engage across industry and government; recent examples include research contracts with the Australian Bureau of Statistics, Facebook, the Office of the Victorian Information Commissioner, and Transport for New South Wales.

Adversarial Machine Learning

AI and ML techniques have made great advances in automating the detection of malicious behaviour in data-intensive applications such as cyber security, defence, fraud detection and border security. For example, ML techniques such as anomaly detection can detect abnormal behaviour that may arise from malicious activities, such as fraudulent financial transactions. However, a major challenge to using ML in this context arises when the adversaries we are trying to detect know that they are being monitored, and thus modify their behaviour to “poison” the training of the ML models. This challenge has motivated research into Adversarial Machine Learning, which aims to develop robust ML techniques that are resilient against the attempts of intelligent adversaries to manipulate ML-based systems. In particular, there is a need for assurance platforms to test whether a given ML system is vulnerable to these types of adversarial attacks.

The University of Melbourne (with support from Defence’s Next Generation Technology Fund) has developed a range of techniques for assurance of ML systems in adversarial environments. We have developed a variety of new technologies to test the susceptibility of a given ML system to adversarial attacks, such as contaminating training data, crafting adversarial test cases that can fool a system, or generating privacy attacks that can extract private information from an ML model. This has led to practical tool sets that can be used for assurance testing of ML models against adversarial attacks in practical applications, such as cyber security, defence and fraud detection.
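
As a simplified illustration of one of these attack classes, the sketch below crafts adversarial test cases in the spirit of the fast gradient sign method (FGSM) against a plain logistic-regression model, where the input gradient can be written out by hand. The data, model and perturbation budget are illustrative assumptions rather than a description of the Lab’s assurance tooling.

```python
# A minimal FGSM-style sketch: perturb inputs in the direction that increases
# the model's loss and compare accuracy before and after the perturbation.
# Assumptions: synthetic data, a logistic-regression model and epsilon = 0.3.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)
w, b = model.coef_[0], model.intercept_[0]

# For logistic regression, the gradient of the cross-entropy loss with respect
# to the input x is (p - y) * w, where p = sigmoid(w.x + b).
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
grad_x = (p - y)[:, None] * w[None, :]

epsilon = 0.3
X_adv = X + epsilon * np.sign(grad_x)  # FGSM step: move each input uphill in loss

print("clean accuracy      :", model.score(X, y))
print("adversarial accuracy:", model.score(X_adv, y))
```

A large gap between clean and adversarial accuracy under a small perturbation budget is one indicator that a model needs hardening before deployment in an adversarial environment.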

The AI Assurance Lab draws upon the extensive practical and theoretical experience of the Faculty of Engineering and Information Technology. It builds on our research collaboration with organisations such as the Defence Science and Technology Group on using Adversarial Machine Learning for Cyber Security. It also draws upon the expertise of the University of Melbourne Academic Centre for Cyber Security Excellence, which is one of two such centres funded by the Commonwealth Government to advance research and education in Cyber Security, especially in the use of AI and ML for security analytics in adversarial environments.

Digital Ethics

Advances in digital technologies bring increasing ethical challenges, which need to be considered by stakeholders, including the scientists and engineers involved in their development.

Centre for Artificial Intelligence and Digital Ethics

Combining expertise from Melbourne Law School, the Faculty of Engineering and Information Technology, the Faculty of Arts, and the Faculty of Science, the Centre for Artificial Intelligence and Digital Ethics (CAIDE) brings interdisciplinary insights to AI and digital ethics with a uniquely Australian focus.

The activities of CAIDE revolve around four themes: fairness, accountability, transparency and privacy.

CAIDE is leading research, teaching and engagement on these themes, addressing each in the context of challenges and opportunities presented in Australia. The AI Assurance Lab activities are being shaped by ethical principles developed by collaborators at CAIDE. This includes consideration of important issues such as fairness and anti-discrimination, auditing and transparency, accountability and governance, and consent and data privacy.