Bayesian region selection for adaptive dictionary-based Super-Resolution
Eduardo Pérez-Pellitero (Technicolor)
Jordi Salvador (Technicolor)
Javier Ruiz-Hidalgo (Universitat Politècnica de Catalunya)
Bodo Rosenhahn (Leibniz Universität Hannover)
Proceedings of the British Machine Vision Conference 2013
Abstract
The performance of dictionary-based super-resolution (SR) strongly depends on the contents of the training dataset. Nevertheless, many dictionary-based SR methods randomly select patches from a larger set of training images to build their dictionaries, thus relying on the patches being diverse enough. This paper describes a dictionary building method for SR based on adaptively selecting an optimal subset of patches out of the training images. Each training image is divided into sub-image entities, named regions, of such a size that texture consistency is preserved and high-frequency (HF) energy is present. For each input patch to super-resolve, the best-fitting region is found through a Bayesian selection. In order to handle the high number of regions in the training dataset, a local Naive Bayes Nearest Neighbor (NBNN) approach is used. Trained with this adapted subset of patches, sparse coding SR is applied to recover the high-resolution image. Experimental results demonstrate that using our adaptive algorithm produces an improvement in SR performance with respect to non-adaptive training.
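The abstract describes the region selection step only at a high level; the following Python sketch illustrates how a local NBNN selection of this kind could look. The descriptor extraction, the neighbourhood size k and the helper name select_region_local_nbnn are illustrative assumptions, not the paper's exact implementation.

# Minimal sketch of local NBNN region selection (assumed setup, not the
# paper's exact pipeline): descriptors from the input patch vote for the
# training region that minimises the accumulated nearest-neighbour distance.
import numpy as np
from scipy.spatial import cKDTree

def select_region_local_nbnn(query_descs, region_descs, region_labels, k=10):
    """Return the label of the best-fitting training region for one input patch.

    query_descs   : (Q, D) descriptors extracted from the low-resolution patch
    region_descs  : (N, D) descriptors pooled from all training regions
    region_labels : (N,)   region index of each training descriptor
    k             : number of neighbours examined per query descriptor (assumed)
    """
    tree = cKDTree(region_descs)
    scores = {}  # accumulated adjusted distance per region
    # The (k+1)-th neighbour serves as the "background" distance, as in local NBNN.
    dists, idxs = tree.query(query_descs, k=k + 1)
    for d_row, i_row in zip(dists, idxs):
        d_background = d_row[-1]
        seen = set()
        for d, i in zip(d_row[:-1], i_row[:-1]):
            r = region_labels[i]
            if r in seen:  # only the closest match per region contributes
                continue
            seen.add(r)
            scores[r] = scores.get(r, 0.0) + (d - d_background)
    # The region minimising the total adjusted distance is selected; its patches
    # would then be used to train the sparse coding SR dictionary.
    return min(scores, key=scores.get)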
BibTeX
@inproceedings{PerezPellitero2013,
  author    = {P\'erez-Pellitero, E. and Salvador, J. and Ruiz-Hidalgo, J. and Rosenhahn, B.},
  title     = {{Bayesian region selection for adaptive dictionary-based Super-Resolution}},
  booktitle = {Proc. British Machine Vision Conf.},
  year      = {2013},
}