Deep CNN-Based Super-Resolution Using External and Internal Examples
Submitted to IEEE Signal Processing Letters
Junyoung Cheong† and In Kyu Park†
† Inha University, Korea
The recent single image super-resolution (SISR) methods using deep convolutional neural networks (CNNs) show outstanding performance compared with previous SISR methods. However, the advantage of jointly using external and internal examples has not been fully investigated in the deep CNN framework. In this letter, we present a novel method for single image super-resolution that exploits the complementary relation between external and internal example-based SISR methods. The proposed deep CNN model consists of two subnetworks, a global residual network and a self-residual network, which utilize the advantages of external and internal examples, respectively. Compared with conventional joint SISR methods, the proposed method is the first deep CNN-based SISR method that requires no inefficient retraining process. The proposed method outperforms the existing methods in both quantitative and qualitative evaluations.
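As a rough illustration of the two-branch residual idea described above: one branch adds an externally learned residual to the interpolated input (as in VDSR-style global residual learning), and a second branch adds a further residual exploiting the image's own internal examples. The exact subnetwork architectures and the way the branches are combined are not detailed on this page, so the helper functions and toy residual predictors below are hypothetical stand-ins, a minimal sketch rather than the authors' implementation.

```python
import numpy as np

def global_residual_sr(lr_upsampled, residual_net):
    """External-example branch: predict a residual and add it to the
    bicubic-upsampled input (global residual learning, VDSR-style)."""
    return lr_upsampled + residual_net(lr_upsampled)

def joint_sr(lr_upsampled, external_net, internal_net):
    """Hypothetical combination of the two subnetworks: the self-residual
    branch refines the external branch's coarse estimate with a second
    residual prediction driven by internal self-similarity."""
    coarse = lr_upsampled + external_net(lr_upsampled)
    return coarse + internal_net(coarse)

# Toy stand-ins for the trained subnetworks (simple scaled residuals,
# purely for demonstrating the data flow).
external_net = lambda x: 0.10 * x
internal_net = lambda x: 0.05 * x

hr_estimate = joint_sr(np.ones((8, 8)), external_net, internal_net)
```

With these toy predictors, each pixel of the all-ones input becomes 1.0 + 0.10, then 1.10 + 0.055 = 1.155; the real subnetworks would of course be trained CNNs rather than scalar multipliers.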
Paper [PDF soon] | Source Code [ZIP soon]
We evaluate the performance of the proposed algorithm and compare it against several state-of-the-art SISR methods. The HR results of the proposed method are reconstructed on an Intel Core i5 CPU (3.2 GHz) with 12 GB RAM and an NVIDIA Titan GPU. The evaluation is performed on datasets of various natural and urban scenes.
The proposed method predicts only the luminance channel, and the results of the compared methods are obtained from Huang et al.'s webpage.
The proposed method is implemented using the MatConvNet deep learning toolbox and takes approximately 10 hours to train. We also show that the proposed method can be combined with other internal example-driven SISR methods to improve their reconstruction quality.
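Since the method is evaluated on the luminance channel only, the quantitative comparison presumably computes PSNR on the Y channel of the YCbCr color space, which is the common SR evaluation protocol. A minimal sketch of that protocol (the function names are ours; the BT.601 luma coefficients match MATLAB's `rgb2ycbcr` for inputs in [0, 255]):

```python
import numpy as np

def rgb_to_y(img):
    """Extract the BT.601 luma (Y) channel from an RGB image in [0, 255],
    matching MATLAB's rgb2ycbcr convention used in SR benchmarks."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 16.0 + (65.481 * r + 128.553 * g + 24.966 * b) / 255.0

def psnr(ref, est, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and an estimate."""
    mse = np.mean((ref.astype(np.float64) - est.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Usage would be `psnr(rgb_to_y(ground_truth), rgb_to_y(reconstruction))`; benchmark scripts also typically crop a border of `scale` pixels before measuring, a detail omitted here.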
Qualitative results at scale factors ×2, ×3, ×4, and ×8.
The proposed method yields the best quantitative results on the natural image datasets (Set5, Set14, BSD100), slightly better than those of VDSR, which is known as the best algorithm to date. Moreover, the Urban100 dataset contains many repetitive HR structures at various scales, which provides the best conditions for the proposed method (0.3 dB better than the second best, VDSR).
| Proposed + Glasner | – | – | – | – | – | – | 31.39 (1.11) | 28.05 (0.99) | 26.73 (0.56) | 29.75 (1.90) | 24.43 (0.85) |
| Proposed + Sub-band | – | – | – | – | – | – | 31.82 (1.09) | 28.66 (0.78) | 27.19 (0.59) | 30.37 (2.03) | 25.01 (0.82) |
| Proposed + SelfExSR | 37.48 | 33.78 | 31.42 | 32.92 | 29.78 | 28.02 | 31.91 (0.73) | 28.85 (0.55) | 27.31 (0.46) | 30.58 (1.20) | 25.38 (0.56) |
J. Kim, J. K. Lee, and K. M. Lee, “Accurate image super-resolution using very deep convolutional networks,” in Proc. of IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 1646–1654.
J.-B. Huang, A. Singh, and N. Ahuja, “Single image super-resolution from transformed self-exemplars,” in Proc. of IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 5197–5206.
C. Dong, C. Loy, K. He, and X. Tang, “Learning a deep convolutional network for image super-resolution,” in Proc. of European Conference on Computer Vision, 2014, pp. 184–199.
D. Glasner, S. Bagon, and M. Irani, “Super-resolution from a single image,” in Proc. of IEEE International Conference on Computer Vision, 2009, pp. 349–356.
A. Singh and N. Ahuja, “Super-resolution using sub-band self-similarity,” in Proc. of Asian Conference on Computer Vision, 2014, pp. 552–568.
A. Vedaldi and K. Lenc, “MatConvNet – Convolutional Neural Networks for MATLAB,” in Proc. of the ACM International Conference on Multimedia, 2015, pp. 689–692.