From Points to Probability Measures: Statistical Learning on Distributions with Kernel Mean Embedding

URI: http://hdl.handle.net/10900/67223
http://nbn-resolving.de/urn:nbn:de:bsz:21-dspace-672231
http://dx.doi.org/10.15496/publikation-8643
Document Type: PhD Thesis
Date: 2015
Language: English
Faculty: 7 Mathematisch-Naturwissenschaftliche Fakultät
Department: Informatik
Advisor: Schölkopf, Bernhard (Prof. Dr.)
Day of Oral Examination: 2015-09-30
DDC Classification: 004 - Data processing and computer science
500 - Natural sciences and mathematics
510 - Mathematics
Keywords: Machine learning
Other Keywords:
kernel methods
kernel mean embedding
statistical learning theory
reproducing kernel Hilbert space
support vector machine
support measure machine
kernel mean shrinkage estimators
distributional risk minimization
empirical risk minimization
License: http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=de (German), http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=en (English)

Abstract:

This dissertation presents a novel learning framework for probability measures with abundant real-world applications. In the classical setup, the data are assumed to be points drawn independently and identically (i.i.d.) from some unknown distribution. In many scenarios, however, it may be preferable to represent the data as distributions. For instance, when measurements are noisy, we can account for the uncertainty by treating the data themselves as distributions; this is often the case for microarray and astronomical data, where the measurement process is imprecise and replication is often required. Distributions not only embody individual data points but also carry information about their interactions, which can benefit structural learning in high-energy physics, cosmology, causality, and related fields. Moreover, classical problems in statistics, such as statistical estimation, hypothesis testing, and causal inference, may be interpreted in a decision-theoretic sense as machine learning problems on empirical distributions. Rephrasing these problems in this way leads to novel approaches to statistical inference and estimation. Hence, allowing learning algorithms to operate directly on distributions opens up a wide range of future applications.

To work with distributions, the key methodology adopted in this thesis is the kernel mean embedding of distributions, which represents each distribution as a mean function in a reproducing kernel Hilbert space (RKHS). The kernel mean embedding has been applied successfully in two-sample testing, graphical models, and probabilistic inference. This thesis focuses mainly on predictive learning on distributions, i.e., the setting in which the observations are distributions and the goal is to make predictions about previously unseen distributions. In addition, the thesis investigates kernel mean estimation, one of the most fundamental problems of kernel methods.

Probability distributions, as opposed to data points, carry information at a higher level: the aggregate behavior of data points, how the underlying process evolves over time and across domains, and complex concepts that cannot be described merely by individual points. Intelligent organisms recognize and exploit such information naturally. Thus, this work may shed light on the future development of intelligent machines and, most importantly, may provide clues about the true meaning of intelligence.
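To make the embedding concrete: the kernel mean embedding maps a distribution P to the RKHS element mu_P = E_{x~P}[k(x, .)], and a sample {x_1, ..., x_n} drawn from P is embedded via its empirical mean (1/n) * sum_i k(x_i, .). The following is a minimal NumPy sketch, not code from the thesis: the function names, the Gaussian kernel choice, and the bandwidth value are illustrative assumptions. It compares the empirical mean embeddings of two samples via their RKHS distance, the maximum mean discrepancy (MMD) used in two-sample testing.

import numpy as np

def gaussian_kernel(X, Y, bandwidth=1.0):
    # Gaussian (RBF) kernel matrix: k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2)).
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-sq_dists / (2.0 * bandwidth**2))

def mmd_squared(X, Y, bandwidth=1.0):
    # Squared RKHS distance between the empirical mean embeddings of X and Y:
    # ||mu_X - mu_Y||^2 = mean(k(X, X)) - 2 * mean(k(X, Y)) + mean(k(Y, Y)).
    k_xx = gaussian_kernel(X, X, bandwidth)
    k_xy = gaussian_kernel(X, Y, bandwidth)
    k_yy = gaussian_kernel(Y, Y, bandwidth)
    return k_xx.mean() - 2.0 * k_xy.mean() + k_yy.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 1.0, size=(200, 2))  # sample from P
    Y = rng.normal(0.5, 1.0, size=(200, 2))  # sample from Q, mean shifted by 0.5
    print(f"MMD^2 between the samples: {mmd_squared(X, Y):.4f}")

Note that the mean embedding itself is never evaluated explicitly; only kernel evaluations between points are needed. The same idea underlies predictive learning on distributions, where a kernel between two distributions can be defined through the inner product of their mean embeddings.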
