Finding Structure in Silence: A distributed, discriminative approach to structure and representation in spoken communication


Citable link (URI): http://hdl.handle.net/10900/142216
http://nbn-resolving.de/urn:nbn:de:bsz:21-dspace-1422164
http://dx.doi.org/10.15496/publikation-83563
Document type: Dissertation
Date of publication: 2025-05-09
Language: English
Faculty: 5 Faculty of Humanities
Department: General and Comparative Linguistics
Reviewer: Francke, Michael (Prof. Dr.)
Date of oral examination: 2023-05-09
DDC classification: 400 - Language, linguistics
Keywords:
speaker alignment, signal/noise discrimination, information structure, linguistic distributions
License: http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=de http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=en
Order a printed copy: Print-on-Demand

Summary:

The dissertation is under embargo until 9 May 2025.

Abstract:

Talk is a fundamental human experience, and speech signals are the first structured source of information we encounter in our environment. Yet the fact that talking is such a common and seemingly effortless activity easily obscures the challenges involved in explaining how it works. On closer inspection, speech may be one of the most complex behaviors humans exhibit. It involves fine-grained kinetic adaptations that unfold in real time and result in predictable, intelligible signals. Speech evolves through sensitivity to sensory prediction errors from multiple sources. It involves both coordinating one's own behavior (the way speech signals are articulated in context) and modeling mutual expectations in time. All of these mechanisms rest on learning and develop across the lifespan. Given that speakers learn from exposure to speech samples that vary with experience, how do they ever manage to maintain sufficiently similar models of expectations? In this dissertation, I investigate the sources of information (signals) that allow speakers to coordinate their expectations and communicate successfully. I show that regular patterns of co-occurrence between speech forms at various levels of abstraction serve as context to structure and manage the uncertainties of communication while gradually increasing the rate at which fine-grained differences in articulated signals are perceived. I propose that this leads to a predictable ebb and flow of uncertainty that allows us to maintain mutually predictable time templates and, therefore, a distributed transmission process that is, to some degree, experience-independent. This work aims to explain how structure and form emerge from human vocal signals and to answer a fundamental question about the nature of the stable temporal organization of signals produced in human communication.
It applies statistical and computational tools to transcripts and recordings of speech, exploring how speaker experience shapes speech structure at various levels of description. The dissertation links topics in learning theory, measurement and information theory, linguistics, philosophy, cognitive science, and neuroscience. It provides insight into the dynamics of alignment and the role of coordination in efficient information transmission.
