Trusting as a Moral Act: Trustworthy AI and Responsibility




Citable link (URI): http://hdl.handle.net/10900/164429
http://nbn-resolving.org/urn:nbn:de:bsz:21-dspace-1644296
http://dx.doi.org/10.15496/publikation-105758
Document type: Dissertation
Publication date: 2025-04-15
Language: English
Faculty: 5 Faculty of Humanities
Department: Philosophy
Advisor: Spohn, Wolfgang (Prof. Dr.)
Date of oral examination: 2025-02-26
Keywords: Artificial Intelligence
Ethics of AI
Moral responsibility
Philosophy of technology
Trust
License: http://tobias-lib.uni-tuebingen.de/doku/lic_ohne_pod.php?la=de http://tobias-lib.uni-tuebingen.de/doku/lic_ohne_pod.php?la=en

Abstract:

This thesis explores the concept of trust in the context of artificial intelligence, examining how it relates to other forms of trust, such as interpersonal and institutional trust, as well as to neighbouring notions like reliance. I argue that trust is fundamentally a relational concept grounded in moral responsibility, and that this perspective remains applicable even when the trusted party is an AI system. To articulate this view, I develop a belief-based disposition account of trust, according to which trusting involves a normative stance rooted in the belief that the trustee will act in a trustworthy manner. This account allows for trust in AI systems understood as socio-technical tools—systems whose trustworthiness depends not only on their technical capacities but also on how those capacities are represented by the people behind them. By weaving together philosophical analysis with practical considerations, the thesis offers a novel understanding of trust in AI—one that challenges the standard reliance–trust distinction and foregrounds the ethical significance of trusting relationships with non-human agents. The result is a refined perspective on both trust and moral responsibility, with important implications for how we design, assess, and relate to AI in high-stakes contexts.
