Handling photographic imperfections and aliasing in augmented reality

URI: http://nbn-resolving.de/urn:nbn:de:bsz:21-opus-23844
http://hdl.handle.net/10900/48939
Document type: Report
Date: 2006
Source: WSI ; 2006 ; 3
Language: English
Faculty: 7 Mathematisch-Naturwissenschaftliche Fakultät
Department: Sonstige - Informations- und Kognitionswissenschaften
DDC Classification: 004 - Data processing and computer science
Keywords: Augmented reality <Computer science>, Mixed reality, Computer graphics, Three-dimensional computer graphics, Rendering
Other Keywords: Photographic imperfections, aliasing, motion blur, image noise
License: Publishing license including print on demand

Abstract:

In video see-through augmented reality, virtual objects are overlaid on images delivered by a digital video camera. One particular problem of this image mixing process is that the visual appearance of the computer-generated graphics differs strongly from the real background image. In typical augmented reality systems, standard real-time rendering techniques are used for displaying virtual objects. These fast but relatively simplistic methods create an artificial, almost "plastic-like" look for the graphical elements. In this paper, methods for incorporating two particular camera image effects in virtual overlays are described. The first effect is camera image noise, which is contained in the data delivered by the CCD chip used for capturing the real scene. The second effect is motion blur, which is caused by the temporal integration of color intensities on the CCD chip during fast movements of the camera or observed objects, resulting in a blurred camera image. Graphical objects rendered with standard methods contain neither image noise nor motion blur. This is one of the factors which make the virtual objects stand out from the camera image and contribute to the perceptual difference between real and virtual scene elements. Here, approaches for mimicking both camera image noise and motion blur in the graphical representation of virtual objects are proposed. An algorithm for generating a realistic imitation of image noise based on a camera calibration step is described. A rendering method which produces motion blur according to the current camera movement is presented. As a by-product of the described rendering pipeline, it becomes possible to perform a smooth blending between virtual objects and the camera image at their boundary. An implementation of the new rendering methods for virtual objects is described, which utilizes the programmability of modern graphics processing units (GPUs) and is capable of delivering real-time frame rates.
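The abstract states that the image noise imitation is driven by a camera calibration step. As a minimal illustrative sketch (not the authors' GPU implementation), the idea can be modeled as adding zero-mean Gaussian noise to the rendered overlay, with a per-intensity standard deviation looked up from a hypothetical calibration table (here, `noise_std`, an assumed array of 256 values measured from the real camera):

```python
import numpy as np

def add_camera_noise(rendered, noise_std, rng=None):
    """Add intensity-dependent Gaussian noise to a rendered image.

    rendered  : float array of pixel intensities in [0, 255]
    noise_std : hypothetical calibration table; noise_std[i] is the
                measured noise standard deviation at intensity i
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # Look up the calibrated noise level for each pixel's intensity.
    idx = np.clip(rendered, 0, 255).astype(np.uint8)
    sigma = noise_std[idx]
    # Zero-mean Gaussian noise, scaled per pixel, then clamped to range.
    noisy = rendered + rng.normal(0.0, 1.0, rendered.shape) * sigma
    return np.clip(noisy, 0.0, 255.0)
```

In the paper's setting this step would run in a fragment shader on the GPU; the CPU-side version above only demonstrates the calibration-lookup idea.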

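The motion blur method described above produces blur according to the current camera movement. As a simplified CPU-side sketch of the underlying principle (the paper's actual pipeline runs on the GPU), a frame can be averaged over copies shifted along an assumed per-frame pixel displacement of the camera; `np.roll` wrap-around at the image border is a simplification of this sketch:

```python
import numpy as np

def motion_blur(image, shift, samples=8):
    """Approximate motion blur by averaging shifted copies of the frame.

    image   : 2-D float array
    shift   : (dy, dx) total pixel displacement of the camera this frame
              (an assumed input; a real system would derive it from tracking)
    samples : number of positions integrated along the motion path
    """
    acc = np.zeros_like(image, dtype=np.float64)
    for i in range(samples):
        # Parameter t walks from 0 to 1 along the motion path.
        t = i / (samples - 1) if samples > 1 else 0.0
        dy = int(round(shift[0] * t))
        dx = int(round(shift[1] * t))
        acc += np.roll(image, (dy, dx), axis=(0, 1))
    return acc / samples
```

This mirrors the temporal integration on the CCD chip that the abstract identifies as the cause of motion blur: each sample stands in for a moment within the exposure interval.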