Learning robust control policies for real robots

DSpace Repository


URI: http://hdl.handle.net/10900/129787
http://nbn-resolving.de/urn:nbn:de:bsz:21-dspace-1297879
http://dx.doi.org/10.15496/publikation-71149
Document type: Dissertation
Date: 2022-07-27
Language: English
Faculty: 7 Mathematisch-Naturwissenschaftliche Fakultät
Department: Informatik
Advisor: Righetti, Ludovic (Prof. Dr.)
Day of Oral Examination: 2022-01-14
License: Publishing license including print on demand

Abstract:

In this thesis we address the problem of using deep reinforcement learning to generate robust policies for real robots. We identify three key issues that must be tackled to make progress along these lines: how to perform exploration in robotic tasks with discontinuous environments and sparse rewards; how to ensure that policies trained in simulation transfer well to real systems; and how to build policies that are robust to the environment variability encountered in the real world. We tackle these issues through the three papers that constitute this thesis. In the first, we present an approach for learning an exploration process from data gathered on previously solved tasks to aid in solving new ones. In the second, we show how learning variable-gain policies can produce better-performing solutions on contact-sensitive tasks, and we propose a way to regularize these policies that enables direct transfer to real systems and improves their interpretability. In the final work, we propose a two-stage approach that goes from simple demonstrations to robust, adaptive behaviors that can be deployed directly on real systems.
