Efficient Visual SLAM for Autonomous Aerial Vehicles


dc.contributor.advisor Zell, Andreas (Prof. Dr.)
dc.contributor.author Scherer, Sebastian Andreas
dc.date.accessioned 2017-05-17T07:05:09Z
dc.date.available 2017-05-17T07:05:09Z
dc.date.issued 2017-05
dc.identifier.other 488701554 de_DE
dc.identifier.uri http://hdl.handle.net/10900/76288
dc.identifier.uri http://nbn-resolving.de/urn:nbn:de:bsz:21-dspace-762887 de_DE
dc.identifier.uri http://dx.doi.org/10.15496/publikation-17690
dc.description.abstract The general interest in autonomous or semi-autonomous micro aerial vehicles (MAVs) is increasing rapidly. Several commercial applications for autonomous MAVs already exist, and many more are being investigated by research institutes and financially strong companies. Most commercially available applications, however, are rather limited in their autonomy: they rely either on a human operator or on reliable reception of global positioning system (GPS) signals for navigation. Truly autonomous MAVs that can also fly in GPS-denied environments, such as indoors, in forests, or in urban scenarios where the GPS signal may be blocked by tall buildings, clearly require more onboard sensing and computing power. In this dissertation, we explore autonomous MAVs that rely on an RGBD camera as their main sensor for simultaneous localization and mapping (SLAM). Several aspects of efficient visual SLAM with RGBD cameras aimed at MAVs are studied in detail: We first propose a novel principle for integrating depth measurements into visual SLAM systems by combining both 2D image position and depth measurements. We modify a widely used visual odometry system accordingly, so that it serves as a robust and accurate odometry system for RGBD cameras. Based on this principle, we then implement a full RGBD SLAM system that can close loops, perform global pose graph optimization, and run in real time on the computationally constrained onboard computer of our MAV. We also investigate the feasibility of explicitly detecting loops using depth images instead of intensity images, applying a state-of-the-art hierarchical bag-of-words (BoW) approach to depth image features.
Since an MAV flying indoors can often see a clearly distinguishable ground plane, we develop a novel, efficient, and accurate ground plane detection method and show how to use it to suppress drift in height and attitude. Finally, we combine these ideas into a full SLAM system that enables our MAV to fly autonomously in previously unknown environments while creating a map of its surroundings. en
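The abstract's central idea of combining 2D image position and depth measurements can be illustrated with a minimal sketch: instead of a two-component reprojection residual, an RGBD observation yields a three-component residual that also constrains the point along the viewing ray. All names here (`project`, `residual_2d_depth`, the camera intrinsics) are illustrative assumptions for a generic pinhole model, not the thesis's actual implementation.

```python
# Hedged sketch of a combined 2D-image + depth measurement residual.
# A monocular system would use only the first two components; adding the
# depth difference as a third component lets an RGBD measurement also
# constrain the point's distance along the viewing ray.

def project(point_cam, fx, fy, cx, cy):
    """Pinhole projection of a 3D point given in camera coordinates."""
    X, Y, Z = point_cam
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return u, v, Z  # Z doubles as the predicted depth

def residual_2d_depth(point_cam, meas_uv, meas_depth, fx, fy, cx, cy):
    """3-vector residual: 2D reprojection error stacked with a depth error."""
    u, v, z = project(point_cam, fx, fy, cx, cy)
    mu, mv = meas_uv
    return (u - mu, v - mv, z - meas_depth)

# A point 2 m straight ahead, observed exactly at its projection and depth:
r = residual_2d_depth((0.0, 0.0, 2.0), (320.0, 240.0), 2.0,
                      fx=500.0, fy=500.0, cx=320.0, cy=240.0)
print(r)  # (0.0, 0.0, 0.0) -- a perfect measurement gives zero residual
```

In a least-squares SLAM back end, residuals of this form (suitably weighted by measurement covariances) would be summed over all observations and minimized over camera poses and landmark positions.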
dc.language.iso en de_DE
dc.publisher Universität Tübingen de_DE
dc.rights ubt-podok de_DE
dc.rights.uri http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=de de_DE
dc.rights.uri http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=en en
dc.subject.classification Robotik , Bildverstehen , Maschinelles Sehen de_DE
dc.subject.ddc 004 de_DE
dc.subject.other Robotics en
dc.subject.other Computer Vision en
dc.subject.other SLAM en
dc.subject.other Visual Navigation en
dc.subject.other Micro Aerial Vehicle en
dc.title Efficient Visual SLAM for Autonomous Aerial Vehicles en
dc.type PhDThesis de_DE
dcterms.dateAccepted 2016-12-16
utue.publikation.fachbereich Informatik de_DE
utue.publikation.fakultaet 7 Mathematisch-Naturwissenschaftliche Fakultät de_DE

