dc.contributor.advisor |
Black, Michael (Prof. Dr.) |
|
dc.contributor.advisor |
Geiger, Andreas (Prof. Dr.) |
|
dc.contributor.author |
Hassan, Mohamed |
|
dc.date.accessioned |
2023-04-14T09:46:35Z |
|
dc.date.available |
2023-04-14T09:46:35Z |
|
dc.date.issued |
2023-04-14 |
|
dc.identifier.uri |
http://hdl.handle.net/10900/139192 |
|
dc.identifier.uri |
http://nbn-resolving.de/urn:nbn:de:bsz:21-dspace-1391924 |
de_DE |
dc.identifier.uri |
http://dx.doi.org/10.15496/publikation-80539 |
|
dc.description.abstract |
In this thesis, we argue that the 3D scene is vital for understanding, reconstructing, and synthesizing human motion. We present several approaches that take the scene into account when reconstructing and synthesizing Human-Scene Interaction (HSI). We first observe that state-of-the-art pose estimation methods ignore the 3D scene and hence reconstruct poses that are inconsistent with it. We address this by proposing a pose estimation method that takes the 3D scene explicitly into account. We call our method PROX for Proximal Relationships with Object eXclusion. We leverage the data generated using PROX and build a method to automatically place 3D scans of clothed people in scenes. The core novelty of our method is encoding the proximal relationships between the human and the scene in a novel HSI model, called POSA for Pose with prOximitieS and contActs. POSA is limited to static HSI, however. We therefore propose a real-time method for synthesizing dynamic HSI, which we call SAMP for Scene-Aware Motion Prediction. SAMP enables virtual humans to navigate cluttered indoor scenes and naturally interact with objects. Data-driven kinematic models like SAMP can produce high-quality motion when applied in environments similar to those in the training dataset. However, when applied to new scenarios, kinematic models can struggle to generate realistic behaviors that respect scene constraints. In contrast, we present InterPhys, which uses adversarial imitation learning and reinforcement learning to train physically simulated characters that perform scene-interaction tasks in a physically plausible and life-like manner. |
en |
dc.language.iso |
en |
de_DE |
dc.publisher |
Universität Tübingen |
de_DE |
dc.rights |
ubt-podok |
de_DE |
dc.rights.uri |
http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=de |
de_DE |
dc.rights.uri |
http://tobias-lib.uni-tuebingen.de/doku/lic_mit_pod.php?la=en |
en |
dc.subject.classification |
machine vision, computer graphics |
de_DE |
dc.subject.ddc |
004 |
de_DE |
dc.subject.other |
computer vision |
en |
dc.subject.other |
computer graphics |
en |
dc.subject.other |
human pose estimation |
en |
dc.subject.other |
human motion synthesis |
en |
dc.title |
Reconstruction and Synthesis of Human-Scene Interaction |
en |
dc.type |
PhDThesis |
de_DE |
dcterms.dateAccepted |
2023-02-10 |
|
utue.publikation.fachbereich |
Informatik |
de_DE |
utue.publikation.fakultaet |
7 Mathematisch-Naturwissenschaftliche Fakultät |
de_DE |
utue.publikation.noppn |
yes |
de_DE |