GANs schön kompliziert: Applications of Generative Adversarial Networks

dc.contributor.advisor Macke, Jakob H. (Prof. Dr.)
dc.contributor.author Ramesh, Poornima
dc.date.accessioned 2023-01-03T16:30:54Z
dc.date.available 2023-01-03T16:30:54Z
dc.date.issued 2023-01-03
dc.identifier.uri http://hdl.handle.net/10900/135001
dc.identifier.uri http://nbn-resolving.de/urn:nbn:de:bsz:21-dspace-1350010 de_DE
dc.identifier.uri http://dx.doi.org/10.15496/publikation-76352
dc.description.abstract Scientific research progresses via model-building. Researchers attempt to build realistic models of real-world phenomena, ranging from bacterial growth to galactic motion, and study these models as a means of understanding the underlying phenomena. However, making these models as realistic as possible often involves fitting them to experimentally measured data. Recent advances in experimental methods have allowed for the collection of large-scale datasets. Simultaneously, advancements in computational capacity have allowed for more complex model-building. The confluence of these two factors accounts for the rise of machine learning methods as powerful tools, both for building models and for fitting these models to large-scale datasets. In this thesis, we use a particular machine learning technique: generative adversarial networks (GANs). GANs are a flexible and powerful tool, capable of fitting a wide variety of models. We explore the properties of GANs that underpin this flexibility, and show how we can capitalize on them in different scientific applications, beyond the image- and text-generating applications they are well known for. Here we present three different applications of GANs. First, we show how GANs can be used as generative models of neural spike trains, and how they are capable of capturing more features of these spike trains than other approaches. We also show how this could enable insight into how information about stimuli is encoded in the spike trains. Second, we demonstrate how GANs can be used as density estimators for extending simulation-based Bayesian inference to high-dimensional parameter spaces. In this setting, we also show how GANs bridge Bayesian inference methods and variational inference with autoencoders, and we use them to fit complex climate models to data. Finally, we use GANs to infer synaptic plasticity rules for biological rate networks directly from data. We then show how GANs can be used to test the robustness of the inferred rules to differences in data and network initialisation. Overall, we repurpose GANs in new ways for a variety of scientific domains, and show that they confer specific advantages over state-of-the-art methods in each of these domains. en
dc.language.iso en de_DE
dc.publisher Universität Tübingen de_DE
dc.rights ubt-podok de_DE
dc.rights.uri http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=de de_DE
dc.rights.uri http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=en en
dc.subject.ddc 500 de_DE
dc.subject.other GANs en
dc.subject.other neuroscience en
dc.subject.other Bayesian inference en
dc.subject.other probabilistic machine learning en
dc.subject.other statistical models en
dc.title GANs schön kompliziert: Applications of Generative Adversarial Networks en
dc.type PhDThesis de_DE
dcterms.dateAccepted 2022-12-09
utue.publikation.fachbereich Informatik de_DE
utue.publikation.fakultaet 7 Mathematisch-Naturwissenschaftliche Fakultät de_DE
utue.publikation.noppn yes de_DE
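All three applications described in the abstract rest on the same adversarial training scheme, in which a generator and a discriminator are optimised against each other. For orientation, here is a minimal, generic sketch of that scheme in PyTorch, fitted to a toy one-dimensional Gaussian mixture; the network architectures, hyperparameters, and toy target distribution are illustrative assumptions, not the models used in the thesis.

# Minimal, generic GAN training sketch (PyTorch). Everything here is an
# illustrative assumption for exposition, not the thesis' actual setup.
import torch
import torch.nn as nn

torch.manual_seed(0)

def sample_data(n):
    # Toy target: a 1-D mixture of two Gaussians centred at 0 and 3 (assumed example).
    comp = torch.randint(0, 2, (n, 1)).float()
    return comp * (0.5 * torch.randn(n, 1)) + (1 - comp) * (0.5 * torch.randn(n, 1) + 3.0)

latent_dim = 8

generator = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
discriminator = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(),
    nn.Linear(64, 1),  # outputs a logit: "real" vs. "generated"
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

batch = 128
for step in range(2000):
    # Discriminator update: separate real samples from generated ones.
    real = sample_data(batch)
    fake = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(discriminator(real), torch.ones(batch, 1)) + \
             bce(discriminator(fake), torch.zeros(batch, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update (non-saturating loss): make samples the discriminator labels as real.
    fake = generator(torch.randn(batch, latent_dim))
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generator(torch.randn(n, latent_dim)) approximates samples from the target.

The discriminator learns to distinguish measured data from generated samples, while the generator is updated to make its samples indistinguishable from the data; it is this implicit, likelihood-free way of fitting a distribution that the thesis repurposes for spike trains, simulation-based inference, and plasticity rules.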
