Abstract:
Due to recent technological innovations, virtual reality (VR) has become a promising tool for systematically investigating learning in more authentic scenarios while simultaneously providing a controllable experimental setting. When bridging the gap between standardized lab experiments and real-life phenomena, eye tracking can be a rich source of information: it can be instrumental for assessing information processing and learning in VR, and analyzing visual attention through gaze data can provide valuable insights for creating effective virtual learning environments.
However, eye tracking in 3D environments also poses several challenges, including integrating head movement, acquiring gaze-target information, and interpreting eye movements in relation to information processing and learning. This thesis addresses some of these challenges by proposing methodological and analytical solutions, including reliable measures of pupil diameter, gaze-ray casting, network analysis, gaze entropy, and machine-learning models. These approaches are used to measure information processing and learning through eye tracking and to explore the potential for modeling eye movements and visual attention.
First, two standardized virtual experiments focus on processing and encoding information with 3D objects and on measuring reliable pupil-diameter baselines in VR. Second, the distribution of visual attention in a virtual classroom is analyzed to understand the effects of the classroom environment and of different teaching events on students. In the last step, gaze-based attention networks are utilized to study the effect of social-related behavior on visual attention and learning in VR classrooms.
This work contributes to expanding knowledge of VR research in education science and explores the possibilities of eye-tracking analysis in VR. The findings aim to offer insights into information processing and learning in virtual environments and to contribute to developing effective virtual learning environments.