Deep CNN-Based Super-Resolution Using External and Internal Examples

Submitted to IEEE Signal Processing Letters

Junyoung Cheong†   In Kyu Park†

 

† Inha University, Korea

 

 


 

 

Abstract

External example-driven single image super-resolution (SISR) methods using deep convolutional neural networks (CNNs) show outstanding performance compared with previous SISR methods. However, the advantage of jointly using external and internal examples has not been fully investigated in the deep CNN framework. In this letter, we present a novel method for single image super-resolution that exploits the complementary relationship between external and internal example-based SISR methods. The proposed deep CNN model consists of two subnetworks, a global residual network and a self-residual network, which utilize the advantages of external and internal examples, respectively. Compared with conventional joint SISR methods, the proposed method is the first deep CNN-based SISR method that requires no inefficient retraining process. The proposed method outperforms the existing methods in both quantitative and qualitative evaluations.
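
As a rough illustration of the two-subnetwork idea, the following Python (PyTorch) sketch shows one possible realization of a global residual branch (driven by external examples, VDSR-style) combined with a self-residual branch that refines the output of an internal example-based method. The layer depths, branch designs, and the averaging fusion are illustrative assumptions; the actual model is implemented in MatConvNet and may differ.

    # Illustrative sketch only (PyTorch), not the authors' MatConvNet implementation.
    # Assumptions: the global residual branch is VDSR-style (predicts a residual that is
    # added to the bicubic-upsampled luminance input); the self-residual branch refines
    # the HR estimate of an internal example-based method (e.g., SelfExSR); the two
    # branch outputs are fused by simple averaging.
    import torch
    import torch.nn as nn

    def residual_stack(depth, channels=64):
        # A stack of 3x3 conv + ReLU layers ending in a single-channel residual output.
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        return nn.Sequential(*layers)

    class JointResidualSR(nn.Module):
        def __init__(self, global_depth=20, self_depth=10):
            super().__init__()
            self.global_branch = residual_stack(global_depth)  # learned from external examples
            self.self_branch = residual_stack(self_depth)      # refines the internal estimate

        def forward(self, bicubic_y, internal_y):
            # bicubic_y:  bicubic-upsampled LR luminance channel (N x 1 x H x W)
            # internal_y: HR luminance estimate from an internal example-based SISR method
            external_sr = bicubic_y + self.global_branch(bicubic_y)   # global residual
            internal_sr = internal_y + self.self_branch(internal_y)   # self residual
            return 0.5 * (external_sr + internal_sr)                  # hypothetical fusion

With this interface, JointResidualSR()(bicubic_y, internal_y) takes two single-channel luminance tensors of the same HR size and returns the fused HR estimate.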

 
Paper Materials

Paper: [PDF soon]
Source code: [ZIP soon]

 

Experimental Results


We evaluate the performance of the proposed algorithm and compare it against several state-of-the-art SISR methods. The HR results of the proposed method are reconstructed on an Intel Core i5 CPU (3.2 GHz) with 12 GB RAM and an NVIDIA Titan GPU. The evaluation is performed on datasets of various natural and urban scenes. The proposed method predicts only the single luminance channel, and the results of the compared methods are taken from Huang et al.'s webpage [6]. The proposed method is implemented using the MatConvNet deep learning toolbox [7] and takes approximately 10 hours to train. We also show that the proposed method can be combined with other internal example-driven SISR methods to improve their performance.
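
The luminance-only protocol mentioned above is sketched below in Python; super-resolving the Y channel while bicubically upscaling Cb/Cr is the common convention in SISR evaluation, but the helper names and the sr_y_fn interface here are illustrative assumptions rather than the released code.

    # Sketch of the (assumed) luminance-only pipeline: the network processes only the Y
    # channel, Cb/Cr are upscaled with bicubic interpolation, and PSNR is measured on Y.
    import numpy as np
    from PIL import Image

    def upscale_luminance_only(lr_img, scale, sr_y_fn):
        # sr_y_fn: any function mapping an LR Y array in [0, 1] to its HR version.
        w, h = lr_img.size
        y, cb, cr = lr_img.convert("YCbCr").split()
        y_sr = sr_y_fn(np.asarray(y, dtype=np.float32) / 255.0)       # network output (HR Y)
        y_sr = Image.fromarray(np.uint8(np.clip(y_sr, 0.0, 1.0) * 255.0))
        cb_sr = cb.resize((w * scale, h * scale), Image.BICUBIC)      # bicubic chroma
        cr_sr = cr.resize((w * scale, h * scale), Image.BICUBIC)
        return Image.merge("YCbCr", (y_sr, cb_sr, cr_sr)).convert("RGB")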

Qualitative results

Scale factor x2 [figures]

Scale factor x3 [figures]

Scale factor x4 [figures]

Scale factor x8 [figures]
 

Quantitative results


The proposed method yields the best quantitative results on the natural image datasets (Set5, Set14, and BSD100), slightly better than those of VDSR, which is widely regarded as the best algorithm to date. In contrast, the Urban100 dataset contains many repetitive HR structures at various scales, which provides the most favorable condition for the proposed method (0.3 dB better than the second best, VDSR).

 

TABLE I: Quantitative evaluation (PSNR in dB) on the Set5, Set14, BSD100, and Urban100 datasets. Red indicates the best and blue the second-best performance; the numbers in parentheses indicate the improvement of the joint model over the corresponding internal example-based method alone.

                        |       Set5        |       Set14       |                 BSD100                 |         Urban100
Method                  |  x2    x3    x4   |  x2    x3    x4   |      x2           x3           x4      |      x2           x4
------------------------+-------------------+-------------------+----------------------------------------+--------------------------
SRCNN [3]               | 36.28 32.37 30.08 | 32.00 28.90 27.13 | 31.11        28.20        26.70        | 28.65        24.14
VDSR [1]                | 37.42 33.62 31.33 | 32.95 29.73 27.97 | 31.84        28.80        27.26        | 30.21        25.16
Glasner [4]             | 35.43 31.10 28.84 | 31.41 28.21 26.43 | 30.28        27.06        26.17        | 27.85        23.58
Sub-band [5]            |   -     -     -   |   -     -     -   | 30.73        27.88        26.60        | 28.34        24.19
SelfExSR [2]            | 36.50 32.62 30.33 | 32.23 29.16 27.40 | 31.18        28.30        26.85        | 29.38        24.82
Proposed + Glasner [4]  |   -     -     -   |   -     -     -   | 31.39 (1.11) 28.05 (0.99) 26.73 (0.56) | 29.75 (1.90) 24.43 (0.85)
Proposed + Sub-band [5] |   -     -     -   |   -     -     -   | 31.82 (1.09) 28.66 (0.78) 27.19 (0.59) | 30.37 (2.03) 25.01 (0.82)
Proposed + SelfExSR [2] | 37.48 33.78 31.42 | 32.92 29.78 28.02 | 31.91 (0.73) 28.85 (0.55) 27.31 (0.46) | 30.58 (1.20) 25.38 (0.56)
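
For clarity, the parenthesized numbers in Table I are read as the PSNR gain of the joint model over the corresponding internal example-based method alone. The short Python sketch below uses a standard PSNR definition (not taken from the released code) and reproduces one table entry as simple arithmetic.

    # Standard PSNR definition on images normalized to [0, 1]; the reported gain is the
    # difference between the joint result and the internal example-based baseline.
    import numpy as np

    def psnr(reference, estimate, peak=1.0):
        mse = np.mean((np.asarray(reference, dtype=np.float64) -
                       np.asarray(estimate, dtype=np.float64)) ** 2)
        return 10.0 * np.log10(peak ** 2 / mse)

    # Example with the reported numbers (BSD100, x2): Glasner [4] alone reaches 30.28 dB,
    # while Proposed + Glasner reaches 31.39 dB, so the table reports a gain of
    # 31.39 - 30.28 = 1.11 dB, i.e., the "(1.11)" entry.
    print(f"Gain: {31.39 - 30.28:.2f} dB")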

References


[1] J. Kim, J. K. Lee, and K. M. Lee, “Accurate image super-resolution using very deep convolutional networks,” in Proc. of IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 1646–1654.

[2] J.-B. Huang, A. Singh, and N. Ahuja, “Single image super-resolution from transformed self-exemplars,” in Proc. of IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 5197–5206.

[3] C. Dong, C. C. Loy, K. He, and X. Tang, “Learning a deep convolutional network for image super-resolution,” in Proc. of European Conference on Computer Vision, 2014, pp. 184–199.

[4] D. Glasner, S. Bagon, and M. Irani, “Super-resolution from a single image,” in Proc. of IEEE International Conference on Computer Vision, 2009, pp. 349–356.

[5] A. Singh and N. Ahuja, “Super-resolution using sub-band self-similarity,” in Proc. of Asian Conference on Computer Vision, 2014, pp. 552–568.

[6] https://github.com/jbhuang0604/selfexsr.
 


[7] A. Vedaldi and K. Lenc, “MatConvNet: Convolutional Neural Networks for MATLAB,” in Proc. of ACM International Conference on Multimedia, 2015, pp. 689–692.