Toward Understanding Visual Perception in Machines with Human Psychophysics

dc.contributor.advisor Bethge, Matthias (Prof. Dr.)
dc.contributor.author Borowski, Judith
dc.date.accessioned 2022-12-02T09:57:27Z
dc.date.available 2022-12-02T09:57:27Z
dc.date.issued 2022-12-02
dc.identifier.uri de_DE
dc.description.abstract Over the last several years, Deep Learning algorithms have become more and more powerful. As such, they are being deployed in an increasing number of areas, including ones that can directly affect human lives. At the same time, regulations such as the GDPR or the AI Act are putting the demand to better understand these artificial algorithms on legal grounds. How do these algorithms arrive at their decisions? What limits do they have? And what assumptions do they make? This thesis presents three publications that deepen our understanding of deep convolutional neural networks (DNNs) for visual perception of static images. While all of them leverage human psychophysics, they do so in two different ways: either via direct comparison between human and DNN behavioral data, or via an evaluation of how helpful an explainability method is. Besides insights on DNNs, these works emphasize good practices: for comparison studies, we propose a checklist on how to design, conduct, and interpret experiments between different systems; for explainability methods, our evaluations exemplify that quantitatively testing widespread intuitions can help put their benefits into a realistic perspective. In the first publication, we test how similar DNNs are to the human visual system, and more specifically to its capabilities and information processing. Our experiments reveal that DNNs (1) can detect closed contours, (2) perform well on an abstract visual reasoning task, and (3) correctly classify small image crops. On a methodological level, these experiments illustrate that (1) human bias can influence the interpretation of findings, (2) distinguishing necessary from sufficient mechanisms can be challenging, and (3) the degree to which experimental conditions are aligned between systems can alter the outcome. In the second and third publications, we evaluate how helpful humans find the explainability method feature visualization.
The purpose of this tool is to grant insights into the features of a DNN. To measure the general informativeness of feature visualizations and the causal understanding they support, we test participants on two different psychophysical tasks. Our data reveal that humans can indeed understand the inner semantics of a DNN based on this explainability tool. However, other visualizations, such as natural data set samples, also provide useful, and sometimes even more useful, information. On a methodological level, our work illustrates that human evaluations can adjust our expectations toward explainability methods, and that the claims made about such methods have to match the experiment. en
dc.language.iso en de_DE
dc.publisher Universität Tübingen de_DE
dc.rights ubt-podok de_DE
dc.rights.uri de_DE
dc.rights.uri en
dc.subject.ddc 004 de_DE
dc.subject.other machine vision en
dc.subject.other human vision en
dc.subject.other deep learning en
dc.subject.other psychophysics en
dc.subject.other explainability en
dc.subject.other interpretability en
dc.title Toward Understanding Visual Perception in Machines with Human Psychophysics en
dc.type Dissertation de_DE
dcterms.dateAccepted 2022-10-24
utue.publikation.fachbereich Informatik de_DE
utue.publikation.fakultaet 7 Mathematisch-Naturwissenschaftliche Fakultät de_DE
utue.publikation.source Journal of Vision, 2021. ICLR, 2021. NeurIPS, 2021. Pacific Graphics, 2021. de_DE
utue.publikation.noppn yes de_DE

