Event-triggered Learning

DSpace Repository


URI: http://hdl.handle.net/10900/132076
http://nbn-resolving.de/urn:nbn:de:bsz:21-dspace-1320768
http://dx.doi.org/10.15496/publikation-73432
Document type: PhDThesis
Date: 2022-09-22
Language: English
Faculty: 7 Mathematisch-Naturwissenschaftliche Fakultät
Department: Informatik
Advisor: Trimpe, Sebastian (Prof. Dr.)
Day of Oral Examination: 2022-08-30
License: http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=de http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=en
Order a printed copy: Print-on-Demand

Abstract:

Machine learning has seen many recent breakthroughs. Inspired by these, learning-control systems have emerged. In essence, the goal is to learn models and control policies for dynamical systems. Dealing with learning-control systems is hard, and several key challenges differ from classical machine learning tasks. Conceptually, excitation and exploration play a major role in learning-control systems. On the one hand, we usually aim for controllers that stabilize a system, avoiding deviations from a setpoint or reference. On the other hand, learning needs informative data, which well-performing controllers rarely produce. The objectives of many control-theoretic tasks and the requirements for successful learning are thus in conflict. Additionally, control systems in practice often encounter changes in dynamics or other conditions. For example, new tasks, changing load conditions, or different external conditions have a substantial influence on the underlying distribution. Learning can provide the flexibility to adapt the behavior of learning-control systems to these events. Since learning has to be applied with sufficient excitation, many practical situations hinge on the following problem: "When to trigger learning updates in learning-control systems?" This is the core question of this thesis, and despite its relevance, there is no general method that provides an answer. We propose and develop a new paradigm for principled decision making on when to learn, which we call event-triggered learning (ETL).

The first triggers we discuss are designed for networked control systems. All agents use model-based predictions to anticipate the other agents' behavior, which makes communication necessary only when the predictions deviate too much. Essentially, an accurate model can save communication, while a poor model leads to poor predictions and thus frequent updates.
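The prediction-based communication scheme described above can be illustrated with a minimal sketch. The linear model `A`, the threshold `delta`, and the function name are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

def event_triggered_comm(states, A, delta):
    """Send an update only when the model prediction drifts too far.

    states: sequence of observed states, one per time step
    A: assumed linear dynamics model, x[t+1] ~ A @ x[t]
    delta: threshold on the prediction error
    """
    x_pred = states[0]
    sent = [0]  # time steps at which communication occurs
    for t, x in enumerate(states[1:], start=1):
        x_pred = A @ x_pred  # open-loop prediction since the last update
        if np.linalg.norm(x - x_pred) > delta:
            sent.append(t)   # transmit the true state ...
            x_pred = x       # ... and resynchronize the remote prediction
    return sent
```

An accurate model keeps the prediction error below `delta` for long stretches, so the list of communication instants stays short; a poor model triggers transmissions frequently.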
The learning triggers are based on the inter-communication times (the time between two communication instances). These are independent and identically distributed random variables, which directly leads to sound guarantees. The framework is validated in experiments and yields 70% communication savings for wireless sensor networks that monitor human walking.

In the second part, we consider optimal control algorithms, starting with linear quadratic regulators. A perfect model yields the best possible controller, while poor models result in poor controllers. Thus, by analyzing the control performance, we can infer the model's accuracy. From a technical point of view, we have to deal with correlated data and work with more sophisticated tools to provide the desired theoretical guarantees. While we obtain a powerful test that is tightly tailored to the problem at hand, it does not generalize to different control architectures. Therefore, we also take a more general point of view and recast the learning of linear systems as a filtering problem. We leverage Kalman filter-based techniques to derive a sound test and utilize the point estimate of the parameters for targeted learning experiments. The algorithm is independent of the underlying control architecture, but is demonstrated for model predictive control.

Most of the results in the first two parts depend critically on linearity assumptions in the dynamics and further problem-specific properties. In the third part, we take a step back and ask the fundamental question of how to compare (nonlinear) dynamical systems directly from state data. We propose a kernel two-sample test that compares stationary distributions of dynamical systems. Additionally, we introduce a new type of mixing that can be estimated directly from data to deal with the autocorrelations.

In summary, this thesis introduces a new paradigm for deciding when to trigger updates in learning-control systems.
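As one concrete illustration of such an update trigger: with i.i.d. inter-communication times bounded on a known interval, a concentration inequality such as Hoeffding's bounds how far their empirical mean can stray from the value the current model predicts. The following sketch uses that idea; the function name, the bound's exact form, and the parameters are illustrative assumptions rather than the thesis's precise construction:

```python
import math

def learning_trigger(intercomm_times, expected_mean, tau_max, confidence=0.05):
    """Trigger learning when the empirical mean inter-communication time
    deviates significantly from the model-implied expectation.

    Assumes i.i.d. inter-communication times bounded in [0, tau_max].
    Hoeffding: P(|mean - E| >= eps) <= 2 * exp(-2 * n * eps**2 / tau_max**2),
    so eps below bounds the deviation with probability 1 - confidence.
    """
    n = len(intercomm_times)
    empirical_mean = sum(intercomm_times) / n
    eps = tau_max * math.sqrt(math.log(2 / confidence) / (2 * n))
    return abs(empirical_mean - expected_mean) > eps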
Additionally, we develop three instantiations of this paradigm for different learning-control problems. Further, we present applications of the algorithms that yield substantial communication savings, effective controller updates, and the detection of anomalies in human walking data.
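The kernel two-sample test of the third part builds on the maximum mean discrepancy (MMD) between the two stationary distributions. A generic sketch of the plain statistic with a Gaussian kernel follows; the thesis's actual test additionally handles autocorrelated data via the new mixing notion, which this sketch omits:

```python
import numpy as np

def mmd_squared(X, Y, sigma=1.0):
    """Biased estimate of the squared MMD between samples X and Y
    (rows are state samples) with a Gaussian kernel of bandwidth sigma.
    A value near zero suggests the underlying distributions agree."""
    def k(A, B):
        # pairwise squared distances, then Gaussian kernel
        d = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-d / (2 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()
```

In the i.i.d. setting, comparing this statistic against a permutation-based threshold yields the standard kernel two-sample test; for dynamical systems, the autocorrelation of the state trajectories must be accounted for, which is where the data-driven mixing estimate enters.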
