Abstract:
=== Benchmarking blind deconvolution algorithms ===
We have built a dataset that makes it possible to compare seven state-of-the-art deblurring algorithms. We recorded 6D camera motion trajectories from several subjects and played them back on a Hexapod under identical laboratory conditions. The benchmark dataset contains 48 images: four scenes captured with an SLR camera, which, for each scene, was moved along the same twelve motion trajectories. The results were analyzed statistically using four different image quality metrics. We were able to show that the algorithm by Xu et al. [139] produced on average the best results on the dataset, considering all 48 images and all image quality metrics.
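The metric-based comparison can be sketched as follows. This is a minimal illustration using PSNR only (the four metrics actually used are not named in this summary), with hypothetical algorithm names; it is not the benchmark's evaluation code.

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio between a reference and an estimate."""
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def rank_algorithms(reference, estimates):
    """Rank deblurring results by PSNR (higher is better).

    `estimates` maps an algorithm name to its deblurred image.
    """
    scores = {name: psnr(reference, img) for name, img in estimates.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy example: the estimate closer to the sharp reference ranks first.
rng = np.random.default_rng(0)
sharp = rng.random((32, 32))
results = {
    "algo_A": np.clip(sharp + 0.01 * rng.standard_normal((32, 32)), 0, 1),
    "algo_B": np.clip(sharp + 0.10 * rng.standard_normal((32, 32)), 0, 1),
}
ranking = rank_algorithms(sharp, results)
```

Averaging such rankings over all 48 images and all metrics then yields an overall ordering of the algorithms.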
=== Inpainting using a multi-layer perceptron ===
Inpainting is the task of completing missing pixels in an image in a visually plausible way. We show that a purely learning-based approach is able to learn the inpainting task. The method we used is a multi-layer perceptron (MLP) whose input is a corrupted image patch together with the corresponding mask of the same image location; the mask specifies which pixels are missing. The MLP was trained on image patches generated from images in the ImageNet dataset [27]. We show that the achieved results, measured by the PSNR metric, are better than those of state-of-the-art inpainting algorithms. In addition, we show that it is also possible to train a multi-layer perceptron without the mask as input. The results are then, as expected, not as good, but still visually appealing.
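The input construction and forward pass can be sketched as follows: a single-hidden-layer MLP whose input is the flattened corrupted patch concatenated with the flattened mask. The patch size, hidden-layer width, and random weights below are hypothetical; in practice the weights are learned from (corrupted, clean) patch pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

def inpaint_patch(corrupted, mask, weights):
    """One forward pass of a single-hidden-layer MLP.

    The input is the flattened corrupted patch concatenated with the
    flattened mask (1 = pixel missing); the output is a restored patch
    of the same size.
    """
    x = np.concatenate([corrupted.ravel(), mask.ravel()])
    (W1, b1), (W2, b2) = weights
    h = np.tanh(W1 @ x + b1)          # hidden layer with tanh activation
    return (W2 @ h + b2).reshape(corrupted.shape)

# Hypothetical sizes and untrained random weights: the output is only
# meaningful once the network has been trained on image patches.
p, hidden = 17, 256
d = p * p
weights = [(0.01 * rng.standard_normal((hidden, 2 * d)), np.zeros(hidden)),
           (0.01 * rng.standard_normal((d, hidden)), np.zeros(d))]

patch = rng.random((p, p))
mask = (rng.random((p, p)) < 0.2).astype(float)   # ~20% pixels missing
patch_corrupted = patch * (1.0 - mask)
restored = inpaint_patch(patch_corrupted, mask, weights)
```

The mask-free variant mentioned above corresponds to feeding only `patch_corrupted.ravel()` as input, so the network must infer which pixels are missing from the patch itself.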
=== Depth estimation from light field images ===
Light field photography can be regarded as a generalization of stereo photography. A salient feature of light field images is the emergence of lines in the so-called epipolar plane images (EPIs). The slope of those lines is inversely proportional to the object-to-camera distance. The presented depth estimation algorithm solves an optimization problem to estimate the slopes of those lines. As the result of the optimization is noisy, we use an additional regularization term, which encourages pixels with similar structure and color in the RGB image to have similar depth values. With this regularization term, the depth maps have sharper object boundaries, which coincide with the object boundaries in the RGB image. A comparison with state-of-the-art depth estimation algorithms shows that our results are comparable and additionally exhibit finer structures in the depth map.
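The slope-to-depth relation and the color-guided regularization can be sketched as follows. This is a minimal toy assuming a pinhole model with hypothetical `focal_length` and `baseline` parameters, and a simple neighbor-averaging stand-in for the regularizer; the actual algorithm solves an optimization problem rather than this diffusion.

```python
import numpy as np

def depth_from_slope(slope, focal_length, baseline):
    """Depth from the slope of an EPI line (pinhole model sketch).

    Between adjacent views separated by `baseline`, a point at depth Z
    shifts by disparity focal_length * baseline / Z; this per-view shift
    is the EPI line slope, so Z = focal_length * baseline / slope.
    """
    return focal_length * baseline / slope

def smooth_depth(depth, rgb, sigma_c=0.1, iters=10):
    """Toy stand-in for the color-guided regularization: each pixel's
    depth is pulled toward its 4-neighbors, weighted by color similarity,
    so depth edges stay aligned with RGB edges (periodic boundaries
    via np.roll, for brevity)."""
    d = depth.astype(float).copy()
    for _ in range(iters):
        acc = np.zeros_like(d)
        wsum = np.zeros_like(d)
        for ax, sh in [(0, 1), (0, -1), (1, 1), (1, -1)]:
            nd = np.roll(d, sh, axis=ax)
            nc = np.roll(rgb, sh, axis=ax)
            w = np.exp(-np.sum((rgb - nc) ** 2, axis=-1) / (2 * sigma_c ** 2))
            acc += w * nd
            wsum += w
        d = (d + acc) / (1.0 + wsum)
    return d

z = depth_from_slope(2.0, 10.0, 1.0)   # -> 5.0
rng = np.random.default_rng(1)
rgb = rng.random((8, 8, 3))
smoothed = smooth_depth(rng.random((8, 8)), rgb)
```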