Explainable artificial intelligence

AI and Autonomy Lab project

Project description

As AI becomes more ubiquitous, complex and consequential, the need for people to understand how its decisions are made, and to judge their correctness, fairness and transparency, becomes increasingly crucial for reasons of ethics and trust.

The field of Explainable AI (XAI) aims to address this problem by designing intelligent agents whose decisions, and the actions leading up to those decisions, can be easily understood by humans.

Truly explainable AI requires integration of the technical and human challenges. To make progress, we need a sophisticated understanding of what constitutes “explanation” in an AI context; one that is not merely limited to an articulation of the software processes, but explicitly considers that it is ultimately people who will consume these explanations.

Our research laboratory at the University of Melbourne aims to build explainable AI from these very principles.

We take a human-centered approach to XAI, explicitly studying what questions people want explanations to answer, what makes a good explanation for a person, how explanations and causes can be extracted from complex and often opaque decision-making models, and how they can be communicated to people.

This research effort is a collaborative project involving computer science, cognitive science, social psychology, and human-computer interaction, treating explainable AI as an interaction between person and machine.

Researchers

Chief investigator

The project is led by Professor Timothy Miller.

Staff

Dr Ronal Singh

Professor Liz Sonenberg

Professor Frank Vetere

Dr Eduardo Velloso

Dr Mor Vered

Graduate researchers

Abeer Alshehri

Henrietta Lyons

Prashan Madumal

Research participation

Relevant research collaborations include:

  • Professor Paul Dourish, Honorary Senior Fellow in Computing and Information Systems, University of Melbourne; Professor of Informatics, Donald Bren School of Information and Computer Sciences, UC Irvine.
  • Associate Professor Piers Howe from the Melbourne School of Psychology.

Workshops

IJCAI/ECAI 2018 Workshop on Explainable Artificial Intelligence (XAI)

Publications

Madumal, Prashan, Tim Miller, Liz Sonenberg, and Frank Vetere. "Explainable reinforcement learning through a causal lens." In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 03, pp. 2493–2500. 2020.

Vered, Mor, Piers Howe, Tim Miller, Liz Sonenberg, and Eduardo Velloso. "Demand-Driven Transparency for Monitoring Intelligent Agents." IEEE Transactions on Human-Machine Systems 50, no. 3 (2020): 264–275.

Miller, Tim. "Explanation in artificial intelligence: Insights from the social sciences." Artificial Intelligence 267 (2019): 1–38.

Madumal, Prashan, Tim Miller, Liz Sonenberg, and Frank Vetere. "A Grounded Interaction Protocol for Explainable Artificial Intelligence." In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, pp. 1033–1041. 2019.

Gunning, David, Mark Stefik, Jaesik Choi, Timothy Miller, Simone Stumpf, and Guang-Zhong Yang. "XAI—Explainable artificial intelligence." Science Robotics 4, no. 37 (2019).

Miller, Tim, Piers Howe, and Liz Sonenberg. "Explainable AI: Beware of inmates running the asylum or: How I learnt to stop worrying and love the social and behavioural sciences." arXiv preprint arXiv:1712.00547 (2017).

Funding sources

  • ARC Discovery Projects (Grant ID: DP190103414)
  • Defence Science and Technology Group
  • Microsoft Research

Contact