Modelling Early Spatial Vision and its Influence on Eye Movements in Natural Scenes



Document type: Dissertation
Date: 2018-08-02
Language: English
Faculty: 7 Mathematisch-Naturwissenschaftliche Fakultät
Department: Graduiertenkollegs
Advisor: Wichmann, Felix A. (Prof.)
Day of Oral Examination: 2018-07-24
DDC Classification: 150 - Psychology
Keywords: Visual perception, Modelling
License: Publishing license including print on demand


As we can see well with only a tiny part of our visual field, we need to move our eyes constantly to perceive the world around us. Conversely, our eye movements must be planned using the information we have perceived before. Despite this bidirectional relationship, visual processing and eye movements are typically studied separately. To reunite these fields, I design models that simultaneously predict what we can discriminate and where we look.

First, I develop an image-computable spatial vision model that generalizes classical detection and discrimination data to predict how well arbitrary images can be discriminated. This model fits the classical detection and discrimination data as well as more abstract models do, handles natural-image masking sensibly, and additionally allows me to compute an experimentally validated internal representation of the stimuli used in eye movement research. Next, I develop statistical methods to evaluate dynamical eye movement models based on the direct evaluation of the likelihood of the measured data. These methods are applicable to essentially all eye movement models and provide a solid basis for fitting, evaluating and comparing them. Furthermore, they allow Bayesian inference for model parameters and hierarchical models with different parameters for different subjects.

Finally, I use the early spatial vision model and the improved evaluation techniques to predict a fixation density from the internal representation generated by the early spatial vision model. Comparing these predictions to other models over time enables me to separate the contributions of bottom-up, top-down, low-level and high-level factors. Combining my fixation density model with the existing SceneWalk model of eye movement dynamics yields a mechanistically plausible model that predicts both eye movement and discrimination experiments.
Building on these foundations, future research might extend my model to include higher-level processing, richer dependencies within scanpaths, and a peripheral decline in visual processing, further expanding our understanding of eye movements and visual perception.
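The core idea behind the likelihood-based evaluation can be illustrated with a minimal sketch. It reduces the problem to the simplest case of a static fixation density; the thesis treats dynamical scanpath models more generally, and the function name and toy data here are my own illustration, not taken from the thesis:

```python
import numpy as np

def scanpath_log_likelihood(density, fixations):
    """Log-likelihood of measured fixations under a predicted fixation
    density map (static-density simplification; hypothetical helper).

    density   : 2D array of non-negative saliency/density values per pixel
    fixations : (N, 2) integer array of (row, col) fixation positions
    """
    # Normalise the map so it is a proper probability distribution.
    p = density / density.sum()
    rows, cols = fixations[:, 0], fixations[:, 1]
    # Sum the log-probabilities of the observed fixation locations.
    return np.log(p[rows, cols]).sum()

# Toy example: a uniform density over a 10 x 10 image,
# so every pixel has probability 1/100.
density = np.ones((10, 10))
fixations = np.array([[2, 3], [5, 5], [7, 1]])
ll = scanpath_log_likelihood(density, fixations)
# Three fixations at probability 1/100 each: ll == 3 * log(1/100)
```

Because the quantity is a genuine likelihood, it supports exactly the uses named in the abstract: maximising it fits a model, comparing it across models ranks them on the same data, and placing priors over the parameters enables Bayesian and hierarchical inference.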
