Efficient Visual SLAM for Autonomous Aerial Vehicles



Citable link (URI): http://hdl.handle.net/10900/76288
http://nbn-resolving.de/urn:nbn:de:bsz:21-dspace-762887
http://dx.doi.org/10.15496/publikation-17690
Document type: Dissertation
Date of publication: 2017-05
Language: English
Faculty: 7 Mathematisch-Naturwissenschaftliche Fakultät (Faculty of Science)
Department: Computer Science
Reviewer: Zell, Andreas (Prof. Dr.)
Date of oral examination: 2016-12-16
DDC classification: 004 - Computer Science
Keywords: Robotics, Image Understanding, Machine Vision
Free keywords:
Robotics
Computer Vision
SLAM
Visual Navigation
Micro Aerial Vehicle
License: http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=de http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=en
Order a printed copy: Print-on-Demand

Abstract:

The general interest in autonomous and semi-autonomous micro aerial vehicles (MAVs) is increasing rapidly. Several commercial applications for autonomous MAVs already exist, and many more are being investigated by research institutes and a number of financially strong companies. Most commercially available systems, however, are rather limited in their autonomy: they rely either on a human operator or on reliable reception of global positioning system (GPS) signals for navigation. Truly autonomous MAVs that can also fly in GPS-denied environments, such as indoors, in forests, or in urban scenarios where tall buildings may block the GPS signal, clearly require more on-board sensing and computational capability.

In this dissertation, we explore autonomous micro aerial vehicles that rely on a so-called RGBD camera as their main sensor for simultaneous localization and mapping (SLAM). Several aspects of efficient visual SLAM with RGBD cameras aimed at micro aerial vehicles are studied in detail: We first propose a novel principle for integrating depth measurements into visual SLAM systems by combining 2D image position and depth measurements. We modify a widely used visual odometry system accordingly, so that it can serve as a robust and accurate odometry system for RGBD cameras. Based on this principle, we implement a full RGBD SLAM system that can close loops, performs global pose graph optimization, and runs in real time on the computationally constrained on-board computer of our MAV. We also investigate the feasibility of explicitly detecting loops from depth images, as opposed to intensity images, with a state-of-the-art hierarchical bag-of-words (BoW) approach using depth image features.
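To illustrate the general idea of combining a 2D image position with a depth measurement in one residual (a minimal sketch of this class of measurement model, not the dissertation's exact formulation; the intrinsics and noise values below are assumptions):

```python
import numpy as np

def project_rgbd(p_cam, fx, fy, cx, cy):
    """Project a 3D point in camera coordinates to a (u, v, d) measurement:
    pixel position from the pinhole model plus the sensor's depth reading."""
    x, y, z = p_cam
    return np.array([fx * x / z + cx, fy * y / z + cy, z])

def rgbd_residual(p_cam, measurement, fx, fy, cx, cy,
                  sigma_uv=1.0, sigma_d=0.05):
    """Combined residual: 2D reprojection error and depth error,
    each scaled by its (assumed) measurement noise so that pixel and
    metric errors become comparable in a least-squares objective."""
    predicted = project_rgbd(p_cam, fx, fy, cx, cy)
    error = predicted - measurement
    weights = np.array([1.0 / sigma_uv, 1.0 / sigma_uv, 1.0 / sigma_d])
    return error * weights
```

In an optimization-based odometry or SLAM backend, residuals of this form would be minimized over camera poses and landmark positions; the weighting keeps a 1-pixel image error and a `sigma_d` depth error equally influential.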
Since an MAV flying indoors can often see a clearly distinguishable ground plane, we develop a novel, efficient, and accurate ground plane detection method and show how to use it to suppress drift in height and attitude. Finally, we combine these ideas into a full SLAM system that enables our MAV to fly autonomously in previously unknown environments while building a map of its surroundings.
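A common generic way to find a dominant ground plane in depth data is a RANSAC plane fit over the back-projected 3D points; the sketch below shows that standard technique rather than the dissertation's own method, and the iteration count and inlier threshold are assumed values:

```python
import numpy as np

def detect_ground_plane(points, n_iters=200, inlier_thresh=0.02, rng=None):
    """RANSAC plane fit over an (N, 3) array of 3D points.
    Returns (normal, d) with normal . p + d = 0 and ||normal|| = 1,
    maximizing the number of points within inlier_thresh of the plane."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = 0
    best_model = None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:  # degenerate (collinear) sample, skip
            continue
        normal = normal / norm
        d = -normal.dot(sample[0])
        dist = np.abs(points @ normal + d)
        inliers = np.count_nonzero(dist < inlier_thresh)
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (normal, d)
    return best_model
```

Once such a plane is found, the distance |d| from the camera origin to the plane gives an absolute height reference, and the plane normal constrains two rotational degrees of freedom, which is the kind of information that can be used to suppress drift in height and attitude.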
