Here is a collection of Human-Computer Interaction (HCI) research projects and software development projects proposed by researchers in the Interaction Design Lab. These projects are available to all master's students and are especially relevant to students undertaking the HCI stream of the MIT. For more details about a particular project, students should contact supervisors directly.
Eye Tracking and Gaze Interaction
Supervisor: Eduardo Velloso
- Gaze Interaction for Public Displays
Though interactive public displays offer many exciting opportunities for retail and restaurants, users are often worried that touching them might be unhygienic, and sometimes don't even know that they are touch-sensitive. In this project, you will build a public screen controlled by the user's eyes. Example applications include a smart menu for a coffee shop, a clothes selector for a fashion shop, etc. See an example of how Pizza Hut implemented it here: https://www.youtube.com/watch?v=HRFn32N7KFY Expected background: User interface design and implementation, C# programming
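A common building block for touch-free displays of this kind is dwell-based selection: a target is activated only after the gaze rests on it for a threshold period, which avoids the "Midas touch" problem of accidental activations. A minimal sketch of the idea (in Python for illustration; class and method names are assumptions, and a real deployment would feed it samples from the eye tracker's SDK):

```python
class DwellSelector:
    """Selects a target once gaze has rested on it for `dwell_time` seconds.

    Feed it a stream of (timestamp, target_id) gaze samples; it reports a
    selection only after the gaze has stayed on the same target continuously
    for the dwell threshold, so glancing across targets triggers nothing.
    """

    def __init__(self, dwell_time=0.8):
        self.dwell_time = dwell_time
        self._current = None   # target currently being fixated
        self._since = None     # timestamp when the current fixation started

    def update(self, timestamp, target):
        """Process one gaze sample; return the selected target or None."""
        if target != self._current:
            # Gaze moved to a different target (or off-screen): restart timer.
            self._current = target
            self._since = timestamp
            return None
        if target is not None and timestamp - self._since >= self.dwell_time:
            self._since = timestamp  # reset so the target can be re-selected
            return target
        return None
```

Calling `update()` at the tracker's sampling rate keeps the logic independent of any particular eye-tracking hardware.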
- Look and Touch Interaction for Touchscreens
Even though by now most people are used to multitouch technology, there is still a lot of room for expanding the available vocabulary of touch-based interactions. In this project, you will explore how to augment traditional touch gestures with eye tracking. The opportunities are endless! See an example here: https://www.youtube.com/watch?v=BuvgizcmQuk&t=3s Expected background: User interface design and implementation, C# programming
- Gaze-Reactive Magic Books for Children
- Gaze Interaction for Smart Watches
The goal of this project is to build prototypes of smart watch applications that the user can control using only their gaze. You will use a wearable eye tracker to monitor the user's eye movements and explore how they can be incorporated into the design of gaze interactions on a smart watch. See an example of our previous work here: https://www.youtube.com/watch?v=KEIgw5A0yfI Expected background: Strong programming skills, combining data analysis with human-computer interaction.
Gesture and Body
Supervisor: Eduardo Velloso
- Classifying gestures through time series analysis
Pen and finger gestures are becoming an important part of user interaction with new devices. Integrating application-specific gestures requires user interface prototypers to know a great deal about gesture recognition in order to choose or design gestures that achieve high user satisfaction in both ease of use and accuracy. In this project you will explore the use of three of the most important metrics in time series analysis to classify the available gesture data: shape-based, dynamic time warping (DTW), and geometric. You will develop a suitable machine learning technique that can learn and detect gestures from user data, focusing on three key machine learning challenges: learning speed, classification speed, and validity. Expected background: Required: Programming skill with Python, Familiarity with machine learning; Recommended: Familiarity with scikit-learn
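As an illustration of distance-based gesture classification, DTW can be paired with a 1-nearest-neighbour classifier: an unknown sample gets the label of its closest template. A minimal Python sketch over 1-D sequences (real gesture traces are multi-dimensional, and in practice you would likely use scikit-learn or a dedicated time-series library rather than this toy version):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences.

    DTW aligns sequences that vary in speed, which makes it a strong
    baseline metric for classifying gestures performed at different paces.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]


def classify(sample, templates):
    """1-nearest-neighbour: return the label of the closest template.

    `templates` is a list of (label, sequence) pairs.
    """
    return min(templates, key=lambda t: dtw_distance(sample, t[1]))[0]
```

The same 1-NN scaffold works for the shape-based and geometric metrics mentioned above; only `dtw_distance` needs swapping out, which makes comparing learning speed, classification speed, and validity across metrics straightforward.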
- Analysing Weight Lifting Exercises
In this project you will build a system for a smart weight training room. The system will include a Kinect sensor and a large screen that will assess the quality of an exercise performance. The project will consist of integrating machine learning algorithms into this 3D system to provide feedback to weight lifters. See an example here: https://youtu.be/G-cZ1OgwTbE Expected background: Machine learning, Python, C#, Unity.
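One simple ingredient of exercise-quality feedback is the angle at a joint, computed from three skeleton positions reported by the Kinect (e.g. shoulder, elbow, wrist for a bicep curl). A Python sketch for illustration; the function name and point format are assumptions:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by 3-D points a-b-c.

    For example, the elbow angle from shoulder, elbow, and wrist positions
    in a Kinect skeleton frame; tracking this angle over a repetition lets
    the system flag form deviations such as an incomplete range of motion.
    """
    v1 = [a[i] - b[i] for i in range(3)]   # vector from joint to first point
    v2 = [c[i] - b[i] for i in range(3)]   # vector from joint to second point
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos))
```

Sequences of such angles per repetition are a natural feature vector for the machine learning component.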
Crowd and Mobile Sensing
Supervisor: Jorge Goncalves
- Raspberry Pi BLE Beacon Scanner
This project entails developing software for Raspberry Pi that is able to scan the surrounding area for nearby BLE iBeacons. Information about scanned Bluetooth devices should then be stored locally, and sent to a server at certain intervals. Expected background: Programming (Python, Database Management), some knowledge of bash scripting.
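The scanning itself would use a BLE library (e.g. bleak or bluepy on the Pi), but the store-and-forward logic can be sketched independently of the radio. A Python sketch of a buffer that accumulates sightings and emits a JSON batch for upload at fixed intervals; class and method names are illustrative:

```python
import json
import time

class BeaconBuffer:
    """Buffers BLE scan records locally and releases them in timed batches.

    The scan loop calls `add()` once per beacon sighting; `flush_due()`
    returns a JSON batch to POST to the server when `interval` seconds
    have elapsed since the last flush, and None otherwise. The clock is
    injectable so the interval logic can be tested without real time.
    """

    def __init__(self, interval=60.0, now=time.time):
        self.interval = interval
        self.now = now
        self.records = []
        self._last_flush = now()

    def add(self, address, rssi):
        self.records.append({"address": address, "rssi": rssi,
                             "seen_at": self.now()})

    def flush_due(self):
        if self.now() - self._last_flush < self.interval or not self.records:
            return None
        batch = json.dumps(self.records)
        self.records = []            # clear local store once handed off
        self._last_flush = self.now()
        return batch
```

In the real system a failed upload should keep the batch on disk (e.g. SQLite) rather than discarding it, so records survive network outages and reboots.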
- Comparison of Image-Based Emotion Detection SDKs
This project entails the development of an Android application that will use the Affectiva Emotion SDK (https://www.affectiva.com/product/emotion-sdk/) and Amazon Rekognition (https://aws.amazon.com/rekognition/) to determine a person’s emotional state based on gathered images. This project is research oriented. Apart from software development skills (Android), the student will need to conduct a user study, analyse the collected data to compare the results from both SDKs, and write a scientific report.
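One way to compare the two SDKs is chance-corrected agreement over the dominant emotion label each reports per image. A Python sketch of Cohen's kappa over paired label sequences; this is one plausible analysis, not a prescribed method for the project:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two raters' label sequences.

    Treats each SDK as a "rater" assigning one emotion label per image:
    1.0 means perfect agreement, 0.0 means agreement at chance level.
    """
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of images where the SDKs match.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under independence, from each SDK's label frequencies.
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    expected = sum(freq_a[lbl] * freq_b[lbl] for lbl in freq_a) / (n * n)
    if expected == 1.0:
        return 1.0
    return (observed - expected) / (1 - expected)
```

Raw agreement rate alone can be misleading when one emotion dominates the dataset, which is why the chance correction matters here.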
- Improving data visualization through crowdsourcing
- Voice-based crowdsourcing system
This project involves the development of a voice-based crowdsourcing system using either an Alexa or a Google Home. It will include the implementation of tasks that are suitable to be completed using voice input, as well as the communication workflow between the user and the smart speaker. A user study should be conducted to evaluate the reliability of the system to complete voice-based crowdsourcing tasks.
- Investigating conformity in online chatting environments
Social conformity is a widely observed social phenomenon in which individuals change their own opinions and judgements to agree with a contradicting group majority. This project explores manifestations of social conformity in an online chatting environment. You are required to:
- Create a channel on Slack (https://slack.com/) connecting a group of participants to answer a series of multiple-choice questions (MCQ).
- Develop a chatbot on the Slack platform to control the flow of questions throughout the quiz.
- Allow participants to answer the questions privately, display results to the group and facilitate discussion among participants through the channel.
- Allow participants to change their initial answer if needed.
- Refer to existing literature to support design decisions of the study.
- Conduct pilot studies to test and refine the developed application.
- Beneficial: Experience in the Slack API, designing and conducting user studies.
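The round logic described above (private answers, group reveal, optional revision) might be modelled independently of the chat platform, with the Slack bot sitting on top of it. A Python sketch; all names are illustrative:

```python
from collections import Counter

class ConformityQuestion:
    """Models one MCQ round of the conformity study.

    Participants first answer privately; after the group distribution is
    revealed they may revise their answer. Conformity can then be measured
    as the rate of switches toward the group majority.
    """

    def __init__(self, options):
        self.options = options
        self.initial = {}   # first (private) answer per participant
        self.final = {}     # most recent answer per participant

    def answer(self, user, choice):
        assert choice in self.options
        self.initial.setdefault(user, choice)   # first answer is locked in
        self.final[user] = choice               # later calls revise it

    def distribution(self):
        """Answer counts to display to the group after the private phase."""
        return Counter(self.final.values())

    def switched_to_majority(self):
        """Participants who changed their initial answer to the majority."""
        majority = self.distribution().most_common(1)[0][0]
        return [u for u, c in self.final.items()
                if c == majority and self.initial[u] != c]
```

Keeping this state separate from the Slack API code also makes the pilot studies easier to script and replay.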
- Mood Inference Literature Review
You are required to do an extensive literature review on mood inference. Mental wellbeing plays a profound role in people’s health and quality of life. Mood tracking using various technologies is an active research topic. A core challenge is how to accurately and reliably measure mood in-the-wild with the help of various technologies. The goal of the project is to create a thorough and detailed literature review on mood inference in-the-wild. The literature review should provide a detailed summary of all previous related research on the topic, highlighting their strengths and weaknesses. Expected background: Strong writing skills; The ability to synthesise concepts from the literature; Interest in the research topic, Independent decision making
Smartphones for Science
Supervisor: Vassilis Kostakos
- Web Application for Creating Smartphone Studies
- Visualisation Dashboard for Smartphone sensor data
This work will contribute to a global open-source project led by the University of Melbourne (http://www.awareframework.com). The overall project aims to make it easy to conduct experiments using smartphones, and to collect sensor data from smartphones. Your role will be to develop an application using R Shiny Dashboard to visualise smartphone sensor data stored in a MySQL server. You will work closely with scientists to identify the requirements for the visualisation tool. Then, you will implement the tool to visualise the sensor data in a way that is suitable for scientists. Your work will help a variety of scientists who are using this tool, including medical doctors, psychologists, epidemiologists, sociologists, education experts, and computer scientists. Expected background: Databases, scripting and data wrangling, some statistical or numerical analysis, ability to conduct interviews. Preferred background: Knowledge of R and Shiny Dashboard is preferred but not necessary.
- Android Visualisation App for Smartphone Sensor Data
This project involves the creation and evaluation of two small applications. Both applications will be evaluated by asking participants a set of questions throughout the day for a period of 2 weeks. The first application is a chat bot (e.g., on Facebook Messenger). The bot is configured to ask participants a set of questions at predefined timeslots through the chat application installed on participants’ phones. The second application is a native Android application running in the background on participants’ phones; it will ask the same set of questions at the same predefined timeslots, but through the native Android or iOS application rather than the chat client. Following a user study, you will compare the results of both applications. Interesting questions include, for example: what is the difference in response time between the bot and the Android/iOS application, and how many questions went unanswered in each? In order to answer these questions, you will need to store all relevant information (e.g., in an online database). Expected background: Programming mobile applications, Collecting and storing data in databases, Previous experience in programming chatbots is beneficial but not required, Previous experience in conducting user studies is beneficial but not required.
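Once responses are logged, the comparison questions above reduce to per-condition summaries over the stored records. A Python sketch; the data layout (delay in seconds, None for unanswered) is an assumption:

```python
import statistics

def response_metrics(events):
    """Summarise question/answer logs per delivery condition.

    `events` maps a condition (e.g. "chatbot", "native") to a list of
    response delays in seconds, with None marking unanswered questions.
    Returns answered count, unanswered count, and median delay per
    condition, ready for the chatbot-vs-native comparison.
    """
    summary = {}
    for condition, delays in events.items():
        answered = [d for d in delays if d is not None]
        summary[condition] = {
            "answered": len(answered),
            "unanswered": len(delays) - len(answered),
            # Median is robust to the occasional hours-late response.
            "median_delay": statistics.median(answered) if answered else None,
        }
    return summary
```

Storing a timestamp for both the prompt and the answer in the online database is what makes the delay column (and hence this comparison) possible.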
Ageing and Technology
Supervisor: Jenny Waycott
- Scoping review of emerging technologies in aged care
This project aims to produce a critical analysis of the current state-of-the-art of emerging technologies that are being used to enrich the lives of the oldest old (those aged over 80). Technologies like virtual reality, social robots, and gesture-based gaming are now being used in a range of aged care settings to provide social and emotional enrichment for older adults. In order to inform further development in this space, it is important to map current uses and collate existing evidence about the effectiveness of these interventions. This project will involve conducting a systematic literature review of scholarly research in this area. It would suit a 25-point MIS project. Expected background: Strong writing skills, interest in the topic, and ability to critically review and synthesise academic literature.
- Virtual reality in aged care
This project aims to identify current uses, benefits, and challenges of virtual reality as diversional therapy in aged care. There are now several vendors offering virtual reality experiences especially designed for people with dementia or people in advanced old age. However, there is limited scholarly research examining the opportunities and challenges associated with deploying virtual reality in residential aged care settings. This research will involve conducting surveys and interviews with aged care staff to determine how virtual reality is currently being used and to identify any ethical or social challenges that might prohibit its effectiveness in this sensitive setting. It would suit a 50-point MIS project. Expected background: Strong writing skills, interest in the topic, knowledge of qualitative data collection and analysis methods.
- The design and use of social robots as companions for older adults
Social robots and robotic pets (e.g., Paro the seal) are now being used to provide companionship for people in advanced old age. This project aims to examine whether robotic companions can foster security and emotional wellbeing among older adults and investigates the ethical and social challenges associated with this emerging technology. The project could involve a systematic review to identify scholarly research on the ethical issues associated with deploying robotic companions; surveys or interviews with care providers; or an observational study of the robotic companion in use (subject to approval from the university’s ethics committee). It would suit a 50-point MIS project. Expected background: Strong writing skills, interest in the topic, knowledge of qualitative data collection and analysis methods.
- Designing for companionship
This project aims to understand the communication and companionship needs of older adults who live alone and to identify how those needs can be addressed through the design and use of new technologies. The project will involve analysing data from interviews with older adults and aged care providers about older adults’ companionship and communication needs, preparing design guidelines, and possibly developing low-fidelity prototypes for design concepts that respond to these guidelines. It would suit a 25-point MIS or MIT project. Expected background: Strong writing skills, interest in the topic, knowledge of qualitative data collection and analysis methods.
Context, Games, and Reading in VR
Supervisor: Tilman Dingler
- Face Race: A Competitive Bio-signal Game Using Grimaces and Facial Heat
The human face is one of the most expressive parts of our body. While it implicitly reveals our emotions and feelings, we use it to explicitly communicate through facial expressions and grimaces. Although we commonly use such expressions in conversation, the face is highly underutilized as an expressive input mechanism in human-computer interaction. In this project, we will explore the use of standard and thermal cameras to create playful interactions through facial gestures and facial heat signatures in games and applications. The scope of the project covers the following: 1. Investigating facial interactions based on cameras and thermal sensing. 2. Design and implementation of facial gestures and different facial heat areas as an input modality. 3. Design and implementation of a simple, multiplayer game with facial and heat gestures being the main input. 4. Conducting a user study with 8 participants who evaluate the game with regard to its usability, novelty, and fun factor. The outcome of this project is a comprehensive literature review of the use of facial gestures in human-computer interaction, software that senses a range of facial gestures and heat signatures, a game which utilizes that input, and a final report about the project. Expected background: Programming experience with a platform of choice (iOS, Android, C#, Objective-C, or Swift), 2D or 3D graphics programming.
- VR Books: Gaze Tracking and Adaptation of Reading Ambience in VR
Reading is one of the most common and prominent ways to acquire knowledge but is also taken up as a leisure activity. While text tends to lead a rather static life on paper pages and screens, virtual reality (VR) allows us to adapt the reading ambience according to the text content and underlying mood. In this project, we will use a FOVE VR headset, which allows us to track the user’s gaze in VR. Hence, the system knows the current text position and can adjust the virtual environment (background visuals and sounds) accordingly in order to create an immersive reading experience. The scope of the project covers the following: 1. Investigating gaze interaction in VR as well as user interfaces for reading. 2. Design and implementation of an adaptable (visuals and sounds) reading room in VR. 3. Implementation of a text reading interface, which uses eye gaze tracking to determine the reader’s text position and triggers changes in the environment. 4. Conducting a user study with 8 participants who evaluate the reading experience with regard to aspects such as comprehension, likeability, and immersion. The outcome of this project is the design and implementation of a VR application that uses eye gaze tracking to adjust the ambience to the currently read content. A report summarizing the development process, the user study, and its findings will be required as a final deliverable. Expected background: Experience with Unity is highly recommended.
- Mobile Toolkit to Assess the Effect of Usage Context on Smartphone Interaction
This project aims to explore the effects of context on interaction with smartphones in everyday life. Contextual factors, such as ambient noise, users’ stress levels, and mood affect how people interact with their mobile devices. Collecting data about usage context can, therefore, be used to 1) build context detection algorithms and subsequently 2) inform smarter interfaces that accommodate them. We will provide an existing mobile toolkit to collect ground truth on interaction performance, which will need to be integrated into an app that triggers the 3-task battery (touch accuracy, visual search, and a typing task) at different times of the day. The scope of this project covers the following: 1. Development of an Android app for collecting context data, such as ambient noise, lighting, and app usage, using smartphone sensors. 2. Implementation of a notification scheduler to remind users to complete the task battery at different times of the day. 3. Implementation of local storage to save the context (sensor data) and task performance data on the device. 4. Implementing a transmission protocol to a logging server, which takes care of sending the collected data when connected to WiFi (the server itself along with the data logging service will be provided). 5. Conducting a user study with 12 participants who install the app on their device to collect data for later analysis over the course of two weeks. You will work on application development in close collaboration with the supervisors. The final deliverable of this project is working software and a report. Expected background: Programming (Android, server communication), Independent decision making.
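The scheduling step boils down to picking the next daily timeslot. A Python sketch assuming fixed (hour, minute) slots; on Android the returned time would be handed to the platform's alarm/notification APIs:

```python
import datetime

def next_trigger(now, slots):
    """Next datetime at which the task battery should be triggered.

    `slots` is a list of (hour, minute) times of day; returns the earliest
    slot later than `now`, rolling over to the first slot tomorrow once
    all of today's slots have passed.
    """
    todays = [now.replace(hour=h, minute=m, second=0, microsecond=0)
              for h, m in sorted(slots)]
    for t in todays:
        if t > now:
            return t
    # All of today's slots are in the past: schedule tomorrow's first slot.
    return todays[0] + datetime.timedelta(days=1)
```

Adding a small random jitter per slot is a common refinement in experience-sampling apps, so participants cannot anticipate the exact prompt time.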
- A Desensitized Keylogging Framework for Studies in-the-wild
People’s alertness, attention, and vigilance are highly variable and subject to systematic changes across the day. These fluctuations—in part caused by circadian rhythms—impact higher level cognitive capacities, including perception, memory, and executive functions. Current computer systems rarely take these fluctuations into account and often overburden or bore the user as a result. To assess the diurnal rhythms of alertness and associated changes in cognitive functioning, this project aims at building a series of keyloggers to track people’s typing behavior across the day. Typing speed and error rates can be used as predictors of alertness and fatigue, so a system that monitors users’ typing behavior is capable of unobtrusively detecting moments of high and low user alertness. The keylogger will collect typing characteristics that are sensitive to users’ privacy, hence we will investigate a number of metrics that can safely be stored and transmitted to a server for logging purposes without compromising the typed content. Expected background: Programming experience with the respective platform (iOS, Android, C#, Objective-C, or Swift), Ability to implement an HTTP POST request sending JSON data to a server.
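For instance, desensitized metrics can be computed from keystroke timestamps and a correction-key flag alone, so the typed characters never need to be stored or transmitted. A Python sketch; the feature names and input format are illustrative:

```python
import statistics

def typing_features(timestamps, correction_flags):
    """Privacy-preserving typing features from one burst of keystrokes.

    Uses only per-key timestamps (seconds) and a boolean "was this a
    correction key (e.g. backspace)" flag, never the typed characters.
    Inter-key interval statistics and the correction rate serve as
    proxies for alertness and fatigue.
    """
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return {
        "keys_per_second": (len(timestamps) - 1)
                           / (timestamps[-1] - timestamps[0]),
        "mean_interval": statistics.mean(intervals),
        "interval_sd": statistics.pstdev(intervals),   # typing rhythm variability
        "correction_rate": sum(correction_flags) / len(correction_flags),
    }
```

Only these aggregate numbers would be serialized to JSON and POSTed to the logging server, which directly addresses the privacy constraint described above.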
Supervisor: George Buchanan
- Placemarking in Library Browsing
This project aims to help us better understand how people keep track of multiple books when browsing library shelves. The work expands on research being done at the State Library of Victoria by Dana McKay, and there are some existing ideas as a starting place for understanding what is happening. You would have to conduct observations of library users as they browse, and analyse their behaviour afterwards, to see what the most common patterns are. Would suit a 25 or 50 credit MIS project. Expected background: Qualitative research, understanding of observation methods, good writing skills.
- Analysing BookCrossing Data
BookCrossing is a platform for exchanging books in public places: books are left in places where they can be found, and readers record which books they read. After a book is read, it should be left in another place to be discovered by other readers. There is publicly available data from 2004 on when and where books were read. This can be cross-referenced with library data, e.g. data from an organization called OCLC, to understand patterns of reading and use of the BookCrossing books. Would suit a 25 or potentially 50 credit project. Expected background: Data analysis, quantitative evaluation, scripting.
- Developing A Virtual Bookshelf
Virtual bookshelves reproduce books on library or study bookshelves, with an image of a shelf filled with books. There have been previous virtual bookshelf systems, but it is now possible to create one from everyday components. The aim would be to develop an interactive virtual shelf using Amazon book cover data and other features of the Amazon API. Would suit a 25-credit project. Expected background: Good coding experience in e.g. C#, Objective-C or Java; relevant GUI programming knowledge.
- Ebook vs Print Book Usage
There are various sets of data on ebook and print book usage from public and university libraries that are readily available. What we don’t yet understand is how use differs between print and electronic book collections. If we did, we could better understand shifts in general behaviour, and plan for future needs. In this project you will analyse some of the data to describe the differences and refer to the literature to understand what the consequences of your findings are. Could be a 25 or, ideally, a 50 credit project. Expected background: Quantitative evaluation, data analysis, scripting.
Supervisor: Hasan Ferdous
- Augmented Studio
This project involves building an integrated platform for our “Augmented Studio” project. In its current form, Augmented Studio uses projection mapping to project anatomical information (muscles, skeleton, blood circulation) onto the human body in real time; it also shows the same information on a screen. In this project, we aim to develop a tablet interface, a web interface, and a virtual reality interface to show and interact with the projected information. We will develop a client-server architecture for the system to ensure scalability and reduce delay, making it suitable for a classroom environment. Expected background: Strong programming skills (C#), Experience with the Unity platform, mobile app development.
Supervisor: Niels Wouters
- Understanding public perception towards artificial intelligence
In this project, we investigate personal attitudes towards surveillance, facial detection and analysis technology in public space. The overall project aims to employ machine learning models that can distinguish personal information from publicly available data. The project entails the development of the front-end and back-end of a public website that integrates our machine learning models (via API), that provides public access to their output, and that captures public response. There is flexibility regarding the specific direction of the work that takes place. Indicative directions include: (1) Improve and expand the accuracy of an existing suite of machine learning models. These models distinguish personality traits from a single facial photo. (2) Explore and propose the integration of additional (public) datasets within an existing suite of machine learning models, and develop novel, interactive interfaces that display these data in public space. (3) Integrate interaction techniques from other SocialNUI/IDL projects (e.g. gaze) to inform the design of interactive interfaces that display output from the machine learning models in public space. The first stage of this project entails the development of an interactive website that replicates our existing suite of machine learning models. Following the development, you will extend the suite with machine learning models for one or more additional datasets and integrate the functionality within the website. In the third stage, you will run a user study with a group of students or in a crowdsourcing environment, analyze feedback, and discuss results in a report. Expected background: Ability to conduct user studies, Knowledge of machine learning/AI, Programming (C#, web platforms, Rest APIs), Strong analytical skills, Independent decision-making.
Human and AI Interactions
Supervisor: Wally Smith
- Deceptive Computing
Computers are increasingly being used to influence people (e.g. the Facebook/Cambridge Analytica events), and future AI will likely have the ability to reason about how humans think and may be able to deceive people. In this project, you will conduct a review of current deceptive uses of computing, and/or will conduct an experiment to discover how people react to deceptive machines. Expected background: Ability to review literature; ability to conduct experiments with human users.
Supervisor: Deepti Aggarwal
- Developing Visualisation for a Smart Wearable Technology
Technology and Emotion
Supervisor: Greg Wadley
- Using Technology For Emotion Regulation
This project investigates how people use digital technologies to shape their emotional states. For example, listening to music can powerfully impact a listener's emotions, and it has long been known that people use music to regulate their emotional states. Recent digital platforms extend and refine this power by making an almost-unlimited selection of content ubiquitously available. This project will investigate how, where, when and why people use technologies to shape moods and emotions in daily life. You will use "in the wild" HCI methods such as interviews, diaries and experience sampling, as well as psychological measures, to study emerging practices in digital emotion regulation. You will be expected to recruit 5 to 10 participants: these may be fellow students or other cohorts. Expected background: Familiarity with HCI methods for understanding user experience.