Reading in a digital world: building new reading experiences and measuring reading behaviour in-the-wild.
While writing and reading date back to the 4th millennium BC, reading as a way of accessing information has evolved from a privilege reserved for an elite few to a skill practised by the majority of the population. Literacy levels have risen to the point where today more than 86% of the world's population is considered literate. Reading techniques themselves have evolved from reading aloud to silent reading, and with the advent of the digital revolution, where, how, and what we read has changed significantly. The information age presents both opportunities and challenges that reshape our reading behaviour. A variety of devices are now available for reading, and their mobility gives us unprecedented opportunities to engage with text anytime, anywhere.
This research focuses on the challenges, best practices, and future directions of ubiquitous technologies that support reading activities. By developing new reading UI design principles for emerging technologies, deploying intelligent scheduling algorithms, and devising new measures for assessing the quality of reading sessions, we build systems that provide better readability, prioritise information gain over attention capture, and instil better reading habits in their users.
The project looks at mobile screens as well as mixed reality displays and utilises eye-tracking data and other sensors (e.g., IMU, touch, camera, infrared) that give us insights into the reader's state of mind. The growing number and sensitivity of mobile sensors make it possible to study reading behaviour directly in the wild.
Further, mixed reality technologies allow readers to interact with text in novel ways. In virtual environments, reading surfaces, ambience, and dynamic content can be adjusted on the fly. In this project, we explore novel reading interfaces by building prototypes and assessing their effects on skim and in-depth reading.
Parts of this project are a collaboration with the Document Intelligence Lab at Adobe Research.
- Tilman Dingler, Lecturer in Human-Computer Interaction
- Namrata Srivastava, Graduate researcher
- Difeng Yu, Graduate researcher
- Xiuge Chen, Graduate researcher
Chunxue Wei, Difeng Yu, and Tilman Dingler. Reading on 3D Surfaces in Virtual Environments. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 721–728. IEEE, 2020.
Tilman Dingler, Benjamin Tag, Sabrina Lehrer, and Albrecht Schmidt. Reading Scheduler: Proactive Recommendations to Help Users Cope with Their Daily Reading Volume. In Proceedings of the 17th International Conference on Mobile and Ubiquitous Multimedia (MUM 2018), pp. 239–244. ACM, Cairo, Egypt, 2018.
Tilman Dingler, Kai Kunze, and Benjamin Outram. VR Reading UIs: Assessing Text Parameters for Reading in VR. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA '18), LBW094:1–LBW094:6. ACM, Montreal QC, Canada, 2018.
Tilman Dingler, Rufat Rzayev, Alireza Sahami Shirazi, and Niels Henze. Designing Consistent Gestures Across Device Types: Eliciting RSVP Controls for Phone, Watch, and Glasses. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18), 419:1–419:12. ACM, Montreal QC, Canada, 2018.