Deceptive AI

Can computers deceive people? It is clear that computers can be used as tools for people to deceive each other (fake news, phishing, etc.), but is it possible for a specially designed AI agent to engage in strategic deception? In other words, can a machine devise and enact deeply deceptive strategies against humans by reasoning about their perceptions, beliefs and intentions? In what kinds of human-machine encounters might this be possible? What would be the nature of the machine's computational and cognitive architecture? How do people understand the possibilities of such machine deception, and how do they react to it?

We are a team of computer scientists, psychologists, and magicians collaborating to explore these questions. Our methodology is to formalize the techniques of deception used by stage conjurors (see, for example, Kuhn, Olson & Raz, 2016) so that they can be built into the thinking processes of software agents, and to test the deceptive powers of these agents when they play computer games against humans (see Smith, Dignum & Sonenberg, 2016). The project will shed light on what it means for a computer to intentionally deceive people, and provide insights into the capacity of software agents to deploy advanced 'theory-of-mind' reasoning in human-machine encounters.
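To make the idea of 'theory-of-mind' reasoning concrete, the minimal Python sketch below shows a level-1 agent that consults a model of a human observer's beliefs and chooses a misdirecting cue, in the spirit of a conjuror's gaze misdirection. This is an illustration only, not the project's actual system: the cup game, the 0.8 belief weight, and all names are invented assumptions.

import random

# Toy misdirection game (hypothetical): the agent hides a token under
# one of three cups and gives a gaze cue that the human is assumed to
# read as an unintended leak of the hiding place.
CUPS = (0, 1, 2)

def human_posterior(gaze):
    # Level-0 model of the human: belief mass concentrates on the
    # gazed-at cup. The 0.8 weight is an assumed parameter, not an
    # empirical value from the project.
    belief = {cup: 0.1 for cup in CUPS}
    belief[gaze] = 0.8
    return belief

def misdirecting_gaze(true_cup):
    # Level-1 (theory-of-mind) choice: pick the gaze that minimizes
    # the belief the modelled human assigns to the true hiding place.
    return min(CUPS, key=lambda gaze: human_posterior(gaze)[true_cup])

if __name__ == "__main__":
    true_cup = random.choice(CUPS)
    gaze = misdirecting_gaze(true_cup)
    print(f"token under cup {true_cup}; agent gazes at cup {gaze}")
    print("modelled human belief after gaze:", human_posterior(gaze))

In a fuller treatment the human model would itself be nested (modelling a human who suspects misdirection), which is where the conjuror's more advanced techniques become relevant.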

Research Team

Project Information

Funding Source: ARC Grant DP180101215, 'A Computational Theory of Strategic Deception'
Project Timeframe: 2018-2020

Contact Details

Dr Wally Smith
School of Computing and Information Systems

Email: wsmith@unimelb.edu.au