dc.contributor.advisor | Schölkopf, Bernhard (Prof. Dr.) |
dc.contributor.author | Simon-Gabriel, Carl-Johann |
dc.date.accessioned | 2019-03-27T06:41:30Z |
dc.date.available | 2019-03-27T06:41:30Z |
dc.date.issued | 2019-03-27 |
dc.identifier.other | 1662448775 | de_DE
dc.identifier.uri | http://hdl.handle.net/10900/87256 |
dc.identifier.uri | http://nbn-resolving.de/urn:nbn:de:bsz:21-dspace-872561 | de_DE
dc.identifier.uri | http://dx.doi.org/10.15496/publikation-28642 |
dc.description.abstract |
Any binary classifier (or score-function) can be used to define a dissimilarity
between two distributions. Many well-known distribution-dissimilarities are
actually classifier-based: total variation, KL- or JS-divergence, Hellinger
distance, etc. Many recent and popular generative modeling algorithms, e.g.
generative adversarial networks (GANs) and their variants, compute or
approximate these distribution-dissimilarities by explicitly training a
classifier.
This thesis introduces and studies such classifier-based
distribution-dissimilarities. After a general introduction, the first part
analyzes the influence of the classifiers' capacity on the dissimilarity's
strength for the special case of maximum mean discrepancies (MMD) and provides
applications. The second part studies applications of classifier-based
distribution-dissimilarities in the context of generative modeling and presents
two new algorithms: Wasserstein Auto-Encoders (WAE) and AdaGAN. The third and
final part focuses on adversarial examples, i.e. targeted yet imperceptible
input perturbations that lead to drastically different predictions of an
artificial classifier. It shows that the adversarial vulnerability of
neural-network-based classifiers typically increases with the input dimension,
independently of the network topology. |
en |
dc.language.iso | en | de_DE
dc.publisher | Universität Tübingen | de_DE
dc.rights | ubt-podok | de_DE
dc.rights.uri | http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=de | de_DE
dc.rights.uri | http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=en | en
dc.subject.classification | Maschinelles Lernen, Künstliche Intelligenz, Maschinelles Sehen, Lerntheorie, Statistik, Wahrscheinlichkeitsrechnung, Hilbert-Raum | de_DE
dc.subject.ddc | 004 | de_DE
dc.subject.ddc | 500 | de_DE
dc.subject.other | Distances for Probability Distributions | en
dc.subject.other | Divergences | en
dc.subject.other | Generative Algorithms | en
dc.subject.other | Generative Algorithmen | de_DE
dc.subject.other | Adversarial Examples | en
dc.subject.other | Gegnerische Beispiele | de_DE
dc.subject.other | Divergenzen | de_DE
dc.subject.other | Distanzen über Wahrscheinlichkeitsmaße | de_DE
dc.title | Distribution-Dissimilarities in Machine Learning | en
dc.type | PhDThesis | de_DE
dcterms.dateAccepted | 2018-12-17 |
utue.publikation.fachbereich | Informatik | de_DE
utue.publikation.fakultaet | 7 Mathematisch-Naturwissenschaftliche Fakultät | de_DE