Masters Projects

Here is a collection of Human-Computer Interaction (HCI) research projects and software development projects proposed by researchers in the Interaction Design Lab. These projects are available to all masters students and especially relevant to students undertaking the HCI stream of the MIT. For more details about a particular project, students should contact supervisors directly.

Eye Tracking and Gaze Interaction

Supervisor: Eduardo Velloso

  • Gaze Interaction for Public Displays

    Though interactive public displays offer many exciting opportunities for retail and restaurants, users are often worried that touching them might be unhygienic, and sometimes don’t even know that they are touch-sensitive. In this project, you will build a public screen controlled by the user’s eyes. Example applications include a smart menu for a coffee shop, a clothes selector for a fashion shop, etc. See an example of how Pizza Hut implemented it here: https://www.youtube.com/watch?v=HRFn32N7KFY

    Expected background: User interface design and implementation, C# programming

  • Look and Touch Interaction for Touchscreens

    Even though by now most people are used to multitouch technology, there is still a lot of room for expanding the available vocabulary of touch-based interactions. In this project, you will explore how to augment traditional touch gestures with eye tracking. The opportunities are endless! See an example here: https://www.youtube.com/watch?v=BuvgizcmQuk&t=3s

    Expected background: User interface design and implementation, C# programming

  • Gaze-Reactive Magic Books for Children

    By monitoring how our eyes move, eye trackers enable us to create books that know how they are being read. In this project, you will build an interactive experience that converts a classic children’s story into a magical book that changes as the child reads it. Examples of interesting types of interaction include triggering actions when a certain passage is read (e.g. playing a sound effect, changing an image, etc), automatically scrolling the text as it is being read, etc. See an example here: https://www.youtube.com/watch?v=8QocWsWd7fc

    Expected background: User interface design and implementation, Javascript, Web programming.

  • Gaze Interaction for Smart Watches

    The goal of this project is to build prototypes of smart watch applications that the user can control by using solely their gaze. You will use a wearable eye tracker to monitor the user's eye movements and explore how they can be incorporated into the design of gaze interactions on a smart watch. See an example of our previous work here: https://www.youtube.com/watch?v=KEIgw5A0yfI

    Expected background: Strong programming skills, combining data analysis with human-computer interaction.

Gesture and Body

Supervisor: Eduardo Velloso

  • Classifying Gestures Through Time Series Analysis

    Pen and finger gestures are becoming an important part of user interaction with new devices. Integrating application-specific gestures requires user interface prototypers to know a great deal about gesture recognition in order to choose or design gestures that are both easy to use and accurately recognised. In this project, you will explore three important classes of time series similarity measures (shape-based, dynamic time warping (DTW), and geometric) to classify the available gesture data. You will develop a machine learning technique that can learn and detect gestures from user data, focusing on three key challenges: learning speed, classification speed, and validity. A sketch of a simple DTW-based classifier is shown below.

    Expected background: Required: Programming skills in Python, familiarity with machine learning; Recommended: familiarity with scikit-learn
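
    As a rough illustration only (not part of the project materials), here is a minimal sketch of a 1-nearest-neighbour gesture classifier using DTW, written with plain NumPy; the gesture arrays, labels, and usage are hypothetical placeholders.

        # Minimal sketch: 1-nearest-neighbour gesture classification with DTW.
        # Each gesture is assumed to be an (n_samples, n_dims) NumPy array (e.g. x/y pen positions).
        import numpy as np

        def dtw_distance(a, b):
            """Dynamic time warping distance between two multivariate time series."""
            n, m = len(a), len(b)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = np.linalg.norm(a[i - 1] - b[j - 1])
                    cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
            return cost[n, m]

        def classify(query, templates, labels):
            """Return the label of the training gesture closest to the query under DTW."""
            distances = [dtw_distance(query, t) for t in templates]
            return labels[int(np.argmin(distances))]

        # Hypothetical usage, with gestures recorded from users:
        # predicted = classify(new_gesture, train_gestures, train_labels)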

  • Analysing Weight Lifting Exercises

    In this project you will build a system for a smart weight training room. The system will include a Kinect sensor and a large screen, and will assess the quality of an exercise performance. The project will consist of integrating machine learning algorithms into this 3D system to provide feedback to weight lifters; a sketch of the classification idea is shown below. See an example here: https://youtu.be/G-cZ1OgwTbE

    Expected background: Machine learning, Python, C#, Unity.
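
    As an illustration under stated assumptions (the project itself targets C# and Unity, though Python is listed in the background), here is a minimal sketch of deriving per-repetition features from Kinect joint positions and feeding them to a scikit-learn classifier; the joint indices, data layout, and labels are hypothetical.

        # Minimal sketch: classifying repetition quality from Kinect skeleton data.
        # Joint indices, data layout, and quality labels are hypothetical placeholders.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def joint_angle(a, b, c):
            """Angle at joint b (in degrees) formed by 3D points a-b-c."""
            v1, v2 = a - b, c - b
            cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
            return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

        def repetition_features(frames):
            """Summarise one repetition; frames is an (n_frames, n_joints, 3) array of joint positions."""
            SHOULDER, ELBOW, WRIST = 4, 5, 6   # hypothetical joint indices
            angles = [joint_angle(f[SHOULDER], f[ELBOW], f[WRIST]) for f in frames]
            return [np.min(angles), np.max(angles), np.mean(angles), np.std(angles)]

        # Hypothetical usage, with labelled repetitions recorded in the training room:
        # X = [repetition_features(r) for r in recorded_reps]
        # model = RandomForestClassifier().fit(X, quality_labels)
        # feedback = model.predict([repetition_features(new_rep)])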

Crowd and Mobile Sensing

Supervisor: Jorge Goncalves

  • Self-control and temptation in smartphone usage

    This project explores the well-known problem of (unwanted) smartphone usage during moments where focus is required. For example, incoming notifications or the desire to look through your social media applications may distract you from your exam preparations. Based on theory from behavioural economics, we know that people are very good at stating their future needs (e.g., I really want to focus on studying tomorrow), but when the moment actually arrives it becomes much more difficult to stay on task (e.g., we end up checking Instagram frequently). You will develop an Android smartphone application which prevents users from opening user-defined applications during user-defined timeslots. For example, a smartphone user may wish to disable access to Facebook and the browser between 9 and 12. The application should contain a clear user interface and log details of smartphone application usage both inside and outside the defined hours. Following development of the app, you will run a user study in which the application will be installed and used by a group of students. The application will collect data on the usage of both the application itself and general smartphone usage. The results of this user study will be analysed and discussed in the report.

    Expected background: Programming (Android development), Previous experience in conducting user studies is beneficial but not required.

  • Interactive web application on Demographics

    This project involves the creation of an interactive website connected to a variety of data sources containing demographic information (United Nations, World Bank, etc). The intended users of the website are researchers, who will be able to compare their participant sample to a larger population. For example, a researcher from Australia can enter demographic information (age, income, etc.) from her study participant sample and see how these results compare to the rest of the country. The website will allow the user to filter and make ‘sub-selections’ based on the available data. This will allow the researcher to answer questions such as ‘are my participants richer or poorer than the average Australian of the same age?’ (a sketch of this kind of comparison is shown below). You will develop this interactive website in close collaboration with the supervisors. While the core idea is already defined (see above), there is room for additional ideas to be added to the project. A challenging aspect will be to connect to the various data sources and make sure that the user can seamlessly interact with all available data. Usability of the application is another key concern.

    Expected background: Web programming, Interactive applications (R / Shiny recommended, but other techniques possible), Connecting to external data sources, Understanding of databases, Some experience in designing UIs.
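
    As a rough illustration only (the project recommends R/Shiny, but other techniques are possible), here is a minimal pandas sketch of comparing a participant sample against national reference data; the file names, column names, and reference dataset are hypothetical placeholders, and the real website would query sources such as the World Bank or United Nations directly.

        # Minimal sketch: compare a participant sample against national reference data.
        # File names and column names are hypothetical placeholders.
        import pandas as pd

        sample = pd.read_csv("participants.csv")         # e.g. columns: age_group, income
        reference = pd.read_csv("national_income.csv")   # e.g. columns: country, age_group, median_income

        aus = reference[reference["country"] == "Australia"]
        merged = sample.merge(aus, on="age_group", how="left")

        # Fraction of participants earning less than the national median for their age group.
        poorer = (merged["income"] < merged["median_income"]).mean()
        print(f"{poorer:.0%} of participants earn below the national median for their age group")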

  • Improving data visualization through crowdsourcing

    Effective data visualization is vital in both business and scientific communities. Although a great variety of tools are available that allow people to create rich illustrations, people often fail to select suitable visualization methods and parameters, resulting in loss of information or misinterpretation. This project aims to address this issue and help users create better visualizations through insights provided by crowdworkers. In this project, you will develop a web application that generates basic graphical visualizations and allows users to change their appearance through parameters such as chart type, colour scheme, label size, etc (a sketch of generating such chart variants is shown below). The application will be connected to a crowdsourcing platform, and the preferences provided by crowdworkers will be used to decide on the optimum parameters for each chart. It is important to keep the design simple so that non-expert users can interact with it easily. The scope of this project is limited to developing the application and demonstrating its applicability. However, if the student is comfortable, it could be expanded to analyse the importance/impact of various parameters.

    Preferred background: Required: Programming skills (Python, JavaScript or R); Recommended: Experience with data visualization using packages/libraries such as matplotlib, D3.js or ggplot
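
    As a rough illustration only, here is a minimal matplotlib sketch of rendering the same data under different presentation parameters so that crowdworkers can rate the variants; the data and parameter choices are hypothetical placeholders.

        # Minimal sketch: render the same data as several chart variants for crowd rating.
        import numpy as np
        import matplotlib
        matplotlib.use("Agg")              # render to files, no display needed
        import matplotlib.pyplot as plt

        labels = ["A", "B", "C", "D"]      # hypothetical data
        values = [23, 45, 12, 30]
        variants = [
            {"kind": "bar", "cmap": "viridis", "label_size": 10},
            {"kind": "barh", "cmap": "plasma", "label_size": 14},
        ]

        for i, v in enumerate(variants):
            fig, ax = plt.subplots()
            colors = plt.get_cmap(v["cmap"])(np.linspace(0.2, 0.8, len(values)))
            if v["kind"] == "bar":
                ax.bar(labels, values, color=colors)
            else:
                ax.barh(labels, values, color=colors)
            ax.tick_params(labelsize=v["label_size"])
            fig.savefig(f"variant_{i}.png", dpi=150)   # images to be rated by crowdworkers
            plt.close(fig)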

  • Using crowdsourcing for stream processing applications

    Crowdsourcing is widely used for tasks that require human expertise. Often these tasks are crowdsourced in batches rather than on a recurring or continuous basis. Using crowdsourcing for stream processing (e.g. content moderation, emergency response, surveillance) is challenging as it introduces additional complications. In this project, you will develop a web application that listens to a data stream and automatically publishes tasks to a crowdsourcing platform such as Amazon Mechanical Turk (a sketch of publishing such a task is shown below). You will need to optimize your application so that you obtain a timely and reliable response for each task. Following the deployment, you will analyse the data to explore the variation in performance over time, the relationship between accuracy and the time taken to obtain a response, etc.

    Expected background: Required: Programming skills (Python, JavaScript or Java); Data Analysis skills (Python or R); Recommended: Experience with AWS SDK
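
    As a rough illustration only, here is a minimal boto3 sketch of publishing one Mechanical Turk task per incoming stream item; the question form XML is assumed to be prepared separately, the reward, timings, and stream source are hypothetical placeholders, and the sandbox endpoint is used so that no real payments are made.

        # Minimal sketch: create one Mechanical Turk HIT per item arriving on a data stream.
        import boto3

        mturk = boto3.client(
            "mturk",
            region_name="us-east-1",
            endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
        )

        with open("question.xml") as f:       # hypothetical question form, prepared separately
            QUESTION_XML = f.read()

        def publish_task(item_id):
            """Publish a single moderation task for one stream item."""
            return mturk.create_hit(
                Title=f"Moderate content item {item_id}",
                Description="Read one item and answer a single question.",
                Reward="0.05",                    # USD, passed as a string
                MaxAssignments=3,                 # redundancy improves reliability
                LifetimeInSeconds=600,            # keep responses timely
                AssignmentDurationInSeconds=120,
                Question=QUESTION_XML,
            )

        # A stream listener (e.g. a webhook or polling loop) would call publish_task(...) per item
        # and later fetch answers with mturk.list_assignments_for_hit(HITId=...).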

  • Community’s Mood Assessment: Analysis of surveillance data

    This project explores whether a community’s mood can be determined from its walking speed. It has previously been shown that an individual’s emotional state can be determined from their gait (body posture, movement features, etc). The idea of this project is to determine a community’s mood from the walking speed of its members, and to link it to local, semi-local and global events. You will be required to work with OpenPose, an open source project developed by researchers from CMU (https://github.com/CMU-Perceptual-Computing-Lab/openpose). You will need to analyse surveillance videos using the provided tool (a sketch of extracting walking speed from its output is shown below). This project is focused on data analysis.

    Expected background: Programming (R, C#), Strong statistical and analytical skills, Independent decision making
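
    As a rough illustration only, here is a minimal sketch of estimating walking speed from OpenPose’s per-frame JSON output (written with its --write_json option); it assumes a single tracked person per frame, and the frame rate, pixel-to-metre scale, and file paths are hypothetical placeholders.

        # Minimal sketch: estimate walking speed from OpenPose JSON keypoint files.
        import json
        import glob
        import numpy as np

        FPS = 25.0                 # assumed video frame rate
        METRES_PER_PIXEL = 0.01    # assumed rough calibration for this camera view
        NECK = 1                   # keypoint index 1 is the neck in the COCO/BODY_25 models

        positions = []
        for path in sorted(glob.glob("output_json/*_keypoints.json")):
            with open(path) as f:
                frame = json.load(f)
            if not frame["people"]:
                positions.append(None)
                continue
            kp = frame["people"][0]["pose_keypoints_2d"]   # flat list of x, y, confidence triples
            x, y, conf = kp[NECK * 3], kp[NECK * 3 + 1], kp[NECK * 3 + 2]
            positions.append((x, y) if conf > 0.3 else None)

        # Per-frame displacement of the neck keypoint -> walking speed in metres per second.
        speeds = [np.hypot(cur[0] - prev[0], cur[1] - prev[1]) * METRES_PER_PIXEL * FPS
                  for prev, cur in zip(positions, positions[1:]) if prev and cur]
        if speeds:
            print(f"median walking speed: {np.median(speeds):.2f} m/s over {len(speeds)} frame pairs")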

  • The effect of situational visual impairments on mobile interaction

    This project investigates the effect of situational visual impairments on mobile interaction. The term situational impairment describes the relationship between the user, the task the user is performing, the user’s surrounding environment, and the technology used to perform the task. In this study we define wearing sunglasses as a situational visual impairment and would like to examine its effect on performance during completion of common smartphone tasks such as target acquisition, visual search, and text entry. You will be provided with Android software to measure the magnitude of the situational visual impairment on mobile interaction, and you may need to make minor changes to it. However, the main focus of this project is to conduct a user study and perform data analysis (a sketch of one possible analysis is shown below). The results of this user study will be analysed and discussed in the report.

    Expected background: Ability to conduct user experiments, Programming (R, Android), Strong statistical and analytical skills, Independent decision making
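
    As a rough illustration only, here is a minimal pandas/SciPy sketch of one possible analysis: a paired comparison of task completion times with and without the impairment. The log file and column names are hypothetical placeholders for whatever the provided software records.

        # Minimal sketch: paired comparison of completion times across the two conditions.
        import pandas as pd
        from scipy import stats

        log = pd.read_csv("study_log.csv")   # hypothetical columns: participant, condition, task, completion_ms

        times = (log[log["task"] == "visual_search"]
                 .groupby(["participant", "condition"])["completion_ms"].mean()
                 .unstack("condition"))

        t, p = stats.ttest_rel(times["sunglasses"], times["baseline"])
        print(f"paired t-test: t = {t:.2f}, p = {p:.3f}")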

  • Mood Inference Literature Review

    You are required to do an extensive literature review on mood inference. Mental wellbeing plays a profound role in people’s health and quality of life, and mood tracking using various technologies is an active research topic. A core challenge is how to accurately and reliably measure mood in-the-wild with the help of various technologies. The goal of the project is to create a thorough and detailed literature review on mood inference in-the-wild. The literature review should provide a detailed summary of all previous related research on the topic, highlighting strengths and weaknesses.

    Expected background: Strong writing skills; The ability to synthesise concepts from the literature; Interest in the research topic, Independent decision making

Smartphones for Science

Supervisor: Vassilis Kostakos

  • Web Application for Creating Smartphone Studies

    This work will contribute to a global open-source project led by the University of Melbourne (http://www.awareframework.com). The overall project aims to make it easy to conduct experiments using smartphones, and to collect sensor data from smartphones. Your role will be to improve an existing website (http://create.awareframework.com) written in Javascript/NodeJS. The website is used by scientists to define the experiments they want to conduct. It allows scientists to define questionnaires, and define which sensor values trigger certain questionnaires on the phone (e.g. launch a questionnaire whenever the user runs the Facebook app). You will only work on the front-end, making sure that the website is usable and stable. Your work will help a variety of scientists who are using this tool, including medical doctors, psychologists, epidemiologists, sociologists, education experts, and computer scientists.

    Expected background: Javascript/NodeJS, Databases, scripting and data wrangling, ability to conduct interviews.
    Preferred background: Usability engineering, CSS, interaction design.

  • Visualisation Dashboard for Smartphone sensor data

    This work will contribute to a global open-source project led by the University of Melbourne (http://www.awareframework.com). The overall project aims to make it easy to conduct experiments using smartphones, and to collect sensor data from smartphones. Your role will be to develop an application using R Shiny Dashboard to visualise smartphone sensor data stored in a MySQL server. You will work closely with scientists to identify the requirements for the visualisation tool. Then, you will implement the tool to visualise the sensor data in a way that is suitable for scientists. Your work will help a variety of scientists who are using this tool, including medical doctors, psychologists, epidemiologists, sociologists, education experts, and computer scientists.

    Expected background: Databases, scripting and data wrangling, some statistical or numerical analysis, ability to conduct interviews.
    Preferred background: Knowledge of R and Shiny Dashboard is preferred but not necessary.

  • Android Visualisation App for Smartphone Sensor Data

    This project involves the creation and evaluation of two small applications. Both applications will be evaluated by asking participants a set of questions throughout the day for a period of two weeks. The first application is a chatbot running within an existing messaging platform (e.g., Facebook Messenger). The bot is configured to ask participants a set of questions at predefined timeslots through the chat application installed on participants’ phones. The second application is a native application running in the background of participants’ phones. This application will ask the same set of questions at the same predefined timeslots, but through a native Android or iOS interface. Following a user study, you will compare the results of both applications. Interesting questions include, for example: what is the difference in response time between the bot and the native application, and how many questions went unanswered in each? In order to answer these questions, you will need to store all relevant information (e.g., in an online database).

    Expected background: Programming mobile applications, Collecting and storing data in databases, Previous experience in programming chatbots is beneficial but not required, Previous experience in conducting user studies is beneficial but not required.

Ageing and Technology

Supervisor: Jenny Waycott

  • Scoping review of emerging technologies in aged care

    This project aims to produce a critical analysis of the current state-of-the-art of emerging technologies that are being used to enrich the lives of the oldest old (those aged over 80). Technologies like virtual reality, social robots, and gesture-based gaming are now being used in a range of aged care settings to provide social and emotional enrichment for older adults. In order to inform further development in this space, it is important to map current uses and collate existing evidence about the effectiveness of these interventions. This project will involve conducting a systematic literature review of scholarly research in this area. It would suit a 25-point MIS project.

    Expected background: Strong writing skills, interest in the topic, and ability to critically review and synthesise academic literature.

  • Virtual reality in aged care

    This project aims to identify current uses, benefits, and challenges of virtual reality as diversional therapy in aged care. There are now several vendors offering virtual reality experiences especially designed for people with dementia or people in advanced old age. However, there is limited scholarly research examining the opportunities and challenges associated with deploying virtual reality in residential aged care settings. This research will involve conducting surveys and interviews with aged care staff to determine how virtual reality is currently being used and to identify any ethical or social challenges that might prohibit its effectiveness in this sensitive setting. It would suit a 50-point MIS project.

    Expected background: Strong writing skills, interest in the topic, knowledge of qualitative data collection and analysis methods.

  • The design and use of social robots as companions for older adults

    Social robots and robotic pets (e.g., Paro the seal) are now being used to provide companionship for people in advanced old age. This project aims to examine whether robotic companions can foster security and emotional wellbeing among older adults and investigates the ethical and social challenges associated with this emerging technology. The project could involve a systematic review to identify scholarly research on the ethical issues associated with deploying robotic companions; surveys or interviews with care providers; or an observational study of the robotic companion in use (subject to approval from the university’s ethics committee). It would suit a 50-point MIS project.

    Expected background: Strong writing skills, interest in the topic, knowledge of qualitative data collection and analysis methods.

  • Designing for companionship

    This project aims to understand the communication and companionship needs of older adults who live alone and to identify how those needs can be addressed through the design and use of new technologies. The project will involve analysing data from interviews with older adults and aged care providers about older adults’ companionship and communication needs, preparing design guidelines, and possibly developing low-fidelity prototypes for design concepts that respond to these guidelines. It would suit a 25-point MIS or MIT project.

    Expected background: Strong writing skills, interest in the topic, knowledge of qualitative data collection and analysis methods.

Context, Games, and Reading in VR

Supervisor: Tilman Dingler

  • Face Race: A Competitive Bio-signal Game Using Grimaces and Facial Heat

    The human face is one of the most expressive parts of our body. While it implicitly reveals our emotions and feelings, we use it to explicitly communicate through facial expressions and grimaces. Although we commonly use such expressions in conversation, as an expressive input mechanism the face is highly underutilized in human-computer interaction. In this project, we will explore the use of standard and thermal cameras to create playful interactions through facial gestures and facial heat signatures in games and applications. The scope of the project covers the following: 1. Investigating facial interactions based on cameras and thermal sensing. 2. Designing and implementing facial gestures and different facial heat areas as an input modality. 3. Designing and implementing a simple multiplayer game with facial and heat gestures as the main input. 4. Conducting a user study with 8 participants who evaluate the game with regard to its usability, novelty, and fun factor. The outcome of this project is a comprehensive literature review of the use of facial gestures in human-computer interaction, software that senses a range of facial gestures and heat signatures, a game which utilizes that input, and a final report about the project.

    Expected background: Programming experience with a platform of choice (iOS, Android, C#, Objective-C, or Swift), 2D or 3D graphics programming.

  • VR Books: Gaze Tracking and Adaptation of Reading Ambience in VR

    Reading is one of the most common and prominent ways to acquire knowledge, but it is also taken up as a leisure activity. While text tends to lead a rather static life on paper pages and screens, virtual reality (VR) allows us to adapt the reading ambience according to the text content and underlying mood. In this project, we will use a FOVE VR headset, which allows us to track the user’s gaze in VR. Hence, the system knows the current text position and can adjust the virtual environment (background visuals and sounds) accordingly in order to create an immersive reading experience. The scope of the project covers the following: 1. Investigating gaze interaction in VR as well as user interfaces for reading. 2. Designing and implementing an adaptable (visuals and sounds) reading room in VR. 3. Implementing a text reading interface, which uses eye gaze tracking to determine the reader’s text position and triggers changes in the environment. 4. Conducting a user study with 8 participants who evaluate the reading experience with regard to aspects such as comprehension, likeability, and immersion. The outcome of this project is the design and implementation of a VR application that uses eye gaze tracking to adjust the ambience to the currently read content. A report summarizing the development process, the user study, and its findings will be required as a final deliverable.

    Expected background: Experience with Unity is highly recommended.

  • Mobile Toolkit to Assess the Effect of Usage Context on Smartphone Interaction

    This project aims to explore the effects of context on interaction with smartphones in everyday life. Contextual factors such as ambient noise, users’ stress levels, and mood affect how people interact with their mobile devices. Collecting data about usage context can therefore be used to 1) build context detection algorithms and subsequently 2) inform smarter interfaces that accommodate these factors. We will provide an existing mobile toolkit to collect ground truth on interaction performance, which will need to be integrated into an app that triggers a three-task battery (touch accuracy, visual search, and a typing task) at different times of the day. The scope of this project covers the following: 1. Development of an Android app for collecting context data, such as ambient noise, lighting, and app usage, using smartphone sensors. 2. Implementation of a notification scheduler to remind users to complete the task battery at different times of the day. 3. Implementation of local storage to save the context (sensor data) and task performance data on the device. 4. Implementation of a transmission protocol to a logging server, which takes care of sending the collected data when connected to WiFi (the server itself, along with the data logging service, will be provided). 5. Conducting a user study with 12 participants who install the app on their device to collect data for later analysis over the course of two weeks. You will work on application development in close collaboration with the supervisors. The final deliverable of this project is working software and a report.

    Expected background: Programming (Android, server communication), Independent decision making.

  • A Desensitized Keylogging Framework for Studies in-the-wild

    People’s alertness, attention, and vigilance are highly variable and subject to systematic changes across the day. These fluctuations, caused in part by circadian rhythms, impact higher-level cognitive capacities, including perception, memory, and executive functions. Current computer systems rarely take these fluctuations into account and often overburden or bore the user as a result. To assess the diurnal rhythms of alertness and associated changes in cognitive functioning, this project aims to build a series of keyloggers to track people’s typing behavior across the day. Typing speed and error rates can be used as predictors of alertness and fatigue, so a system that monitors users’ typing behavior is capable of unobtrusively detecting moments of high and low user alertness. Because raw keystrokes are sensitive to users’ privacy, we will investigate a number of metrics that can safely be stored and transmitted to a server for logging purposes without compromising the typed content (a sketch of such metrics is shown below).

    Expected background: Programming experience with the respective platform (iOS, Android, C#, Objective-C, or Swift), Ability to implement an HTTP POST request sending JSON data to a server.
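
    As a rough illustration only (the project targets mobile platforms, but the idea is the same in any language), here is a minimal Python sketch of turning raw key events into privacy-preserving typing metrics and posting them as JSON; the event format, metric choices, and server URL are hypothetical placeholders.

        # Minimal sketch: desensitized typing metrics, uploaded as JSON; no typed text is stored.
        import time
        import requests

        def summarise(events):
            """events: list of (timestamp_seconds, is_backspace) tuples; key identities are never kept."""
            if len(events) < 2:
                return None
            intervals = [b[0] - a[0] for a, b in zip(events, events[1:])]
            return {
                "timestamp": time.time(),
                "keystrokes": len(events),
                "mean_inter_key_interval_ms": 1000 * sum(intervals) / len(intervals),
                "backspace_rate": sum(1 for _, is_bs in events if is_bs) / len(events),
                "typing_speed_cps": len(events) / (events[-1][0] - events[0][0]),
            }

        def upload(metrics):
            """Send the desensitized summary to the logging server (hypothetical URL)."""
            requests.post("https://example.org/keylog-metrics", json=metrics, timeout=5)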

Libraries

Supervisor: George Buchanan

  • Placemarking in Library Browsing

    This project aims to help us better understand how people keep track of multiple books when browsing library shelves. The work expands on research being done at the State Library of Victoria by Dana McKay, and there are some existing ideas as a starting place for understanding what is happening. You would have to conduct observations of library users as they browse, and analyse their behaviour afterwards, to see what the most common patterns are. Would suit a 25 or 50 credit MIS project.

    Expected background: Qualitative research, understanding of observation methods, good writing skills.

  • Analysing BookCrossing Data

    BookCrossing is a platform for exchanging books in public places: books are left in places where they can be found, and readers record which books they read. After a book is read, it should be left in another place to be discovered by other readers. There is publicly available data from 2004 on when and where books were read. This can be cross-referenced with library data, e.g. data from an organization called OCLC, to understand patterns of reading and use of the BookCrossing books (a sketch of this kind of cross-referencing is shown below). Would suit a 25 or potentially 50 credit project.

    Expected background: Data analysis, quantitative evaluation, scripting.
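
    As a rough illustration only, here is a minimal pandas sketch of joining the public 2004 BookCrossing dump with an external catalogue to look at where particular kinds of books are read; the file names, separators, and column names are assumptions and should be checked against the real files.

        # Minimal sketch: cross-reference BookCrossing data with a library catalogue.
        import pandas as pd

        ratings = pd.read_csv("BX-Book-Ratings.csv", sep=";", encoding="latin-1")   # assumed: User-ID, ISBN, Book-Rating
        users = pd.read_csv("BX-Users.csv", sep=";", encoding="latin-1")             # assumed: User-ID, Location, Age
        catalogue = pd.read_csv("oclc_subjects.csv")                                 # hypothetical: ISBN, subject

        merged = (ratings.merge(users, on="User-ID")
                         .merge(catalogue, on="ISBN", how="left"))

        # e.g. which subjects are read most in each country (country is the last part of Location)
        merged["country"] = merged["Location"].str.split(",").str[-1].str.strip()
        print(merged.groupby(["country", "subject"]).size().sort_values(ascending=False).head(10))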

  • Developing A Virtual Bookshelf

    Virtual bookshelves reproduce library or study bookshelves on screen, with an image of a shelf filled with books. There are some previous examples, but it is now possible to create one from everyday components. The aim would be to develop an interactive virtual shelf using Amazon book cover data and other features of the Amazon API. Would suit a 25-credit project.

    Expected background: Good coding experience in e.g. C#, Objective-C or Java; relevant GUI programming knowledge.

  • Ebook vs Print Book Usage

    There are various sets of data on ebook and print book usage from public and university libraries that are readily available. What we don’t yet understand is how usage differs between print and electronic book collections. If we did, we could better understand shifts in general behaviour and plan for future needs. In this project you will analyse some of the data to describe the differences and refer to the literature to understand the consequences of your findings. Could be a 25 or, ideally, a 50 credit project.

    Expected background: Quantitative evaluation, data analysis, scripting.

Projection Mapping

Supervisor: Hasan Ferdous

  • Augmented Studio

    This project involves building an integrated platform for our “Augmented Studio” project. In its current form, Augmented Studio uses projection mapping to project anatomical information (muscles, skeleton, blood circulation) onto the human body live; it also shows the same information on a screen. In this project, we aim to develop a tablet interface, a web interface, and a virtual reality interface to show and interact with the projected information. We will develop a client-server architecture for the system to ensure scalability and reduce delay, making it suitable for a classroom environment.

    Expected background: Strong programming skills (C#), Experience with the Unity platform, Mobile app development.

Face Analysis

Supervisor: Niels Wouters

  • Understanding public perception towards artificial intelligence

    In this project, we investigate personal attitudes towards surveillance, facial detection and analysis technology in public space. The overall project aims to employ machine learning models that can infer personal information from publicly available data. The project entails the development of the front-end and back-end of a public website that integrates our machine learning models (via an API), provides public access to their output, and captures public responses. There is flexibility regarding the specific direction of the work. Indicative directions include: (1) Improve and expand the accuracy of an existing suite of machine learning models; these models infer personality traits from a single facial photo. (2) Explore and propose the integration of additional (public) datasets within an existing suite of machine learning models, and develop novel, interactive interfaces that display these data in public space. (3) Integrate interaction techniques from other SocialNUI/IDL projects (e.g. gaze) to inform the design of interactive interfaces that display output from the machine learning models in public space. The first stage of this project entails the development of an interactive website that replicates our existing suite of machine learning models. Following the development, you will extend the suite with machine learning models for one or more additional datasets and integrate the functionality within the website. In the third stage, you will run a user study with a group of students or in a crowdsourcing environment, analyse feedback, and discuss the results in a report.

    Expected background: Ability to conduct user studies, Knowledge of machine learning/AI, Programming (C#, web platforms, Rest APIs), Strong analytical skills, Independent decision-making.

Human and AI Interactions

Supervisor: Wally Smith

  • Deceptive Computing

    Computers are increasingly being used to influence people (e.g. the Facebook/Cambridge Analytica events), and future AI will likely have the ability to reason about how humans think and may be able to deceive people. In this project, you will conduct a review of current deceptive uses of computing, and/or conduct an experiment to discover how people react to deceptive machines.

    Expected background: Ability to review literature; ability to conduct experiments with human users.

Wearable Technologies

Supervisor: Deepti Aggarwal

  • Developing Visualisation for a Smart Wearable Technology

    We are looking for a Masters student with expertise in iOS development to develop a web interface for a smart wearable technology. The technology is designed to help physiotherapists assess patients’ lower body movements during video consultations. The technology consists of two parts: (1) a pair of socks with embedded sensors that capture data related to weight distribution, and (2) a web interface that presents the captured data to physiotherapists over a distance. The student will work with a pair of commercially available sensing socks with pressure sensors attached. The socks transmit sensor data about weight distribution patterns that can be accessed through the SDK. The tasks include using this SDK to develop an engaging web interface that works in a video consultation setting. This is suitable for a 25-credit project.

    For more details, contact Deepti Aggarwal on deepti.aggarwal@unimelb.edu.au

    Expected background: Mobile development (preferably iOS), web development (Javascript/NodeJS), and information visualisation.