Robust Light Field Depth Estimation using Occlusion-Noise Aware Data Costs

Accepted to IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)

Williem (Bina Nusantara University, Indonesia)
In Kyu Park (Inha University, Korea)
Kyoung Mu Lee (Seoul National University, Korea)

Abstract

Depth estimation is essential in many light field applications. Numerous algorithms have been developed using a range of light field properties. However, conventional data costs fail on noisy scenes in which occlusion is present. To address this problem, we introduce a light field depth estimation method that is more robust against occlusion and less sensitive to noise. Two novel data costs are proposed, measured on the angular patch and the refocused image, respectively. The constrained angular entropy cost (CAE) reduces the effects of the dominant occluder and noise in the angular patch, resulting in a low cost at the correct depth. The constrained adaptive defocus cost (CAD) provides a low cost in the occlusion region while also maintaining robustness against noise. Integrating the two data costs significantly improves robustness to both occlusion and noise. Cost volume filtering and graph cut optimization are then applied to improve the accuracy of the depth map. Our experimental results confirm the robustness of the proposed method and demonstrate its ability to produce high-quality depth maps for a range of scenes. The proposed method outperforms other state-of-the-art light field depth estimation methods in both qualitative and quantitative evaluations.
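To make the abstract's idea of an occlusion-aware angular data cost concrete, the sketch below shows one plausible way an entropy-based cost with a photo-consistency constraint could be computed for a single pixel and disparity candidate. This is a minimal illustration under assumed conventions, not the paper's exact CAE formulation or the released code: the function name `constrained_angular_entropy`, the `keep_ratio` parameter, and the simple color-histogram entropy are all hypothetical choices made for this example.

```python
import numpy as np

def constrained_angular_entropy(angular_patch, center, keep_ratio=0.5, bins=32):
    """Illustrative entropy-based data cost for one pixel / disparity candidate.

    angular_patch : (N, 3) float array of RGB samples gathered from the
                    angular views after shearing the light field by the
                    candidate disparity.
    center        : (3,) RGB sample of the same pixel in the central view.
    keep_ratio    : hypothetical parameter -- fraction of samples kept,
                    chosen as those closest in color to the central view,
                    so a dominant occluder contributes less to the entropy.
    """
    # Constraint step: keep only the samples most consistent with the
    # central view; occluder samples tend to be far away in color.
    dist = np.linalg.norm(angular_patch - center, axis=1)
    k = max(1, int(keep_ratio * len(angular_patch)))
    kept = angular_patch[np.argsort(dist)[:k]]

    # Quantize colors and build a normalized histogram over the kept samples.
    idx = np.clip((kept * bins).astype(int), 0, bins - 1)
    codes = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    _, counts = np.unique(codes, return_counts=True)
    p = counts / counts.sum()

    # Shannon entropy: low when the (constrained) angular patch is
    # photo-consistent, i.e., at the correct disparity.
    return float(-(p * np.log2(p)).sum())

# Toy usage: with the constraint, a patch partly covered by an occluder
# still scores a low cost, because the occluder samples are discarded;
# without the constraint (keep_ratio=1.0) the same patch scores higher.
rng = np.random.default_rng(0)
consistent = np.tile([[0.2, 0.5, 0.7]], (25, 1)) + 0.01 * rng.standard_normal((25, 3))
occluded = consistent.copy()
occluded[:10] = [0.9, 0.1, 0.1]          # dominant occluder in 10 of 25 views
c = np.array([0.2, 0.5, 0.7])
print(constrained_angular_entropy(consistent, c))                 # low
print(constrained_angular_entropy(occluded, c))                   # still low
print(constrained_angular_entropy(occluded, c, keep_ratio=1.0))   # higher
```

In this toy model, a per-pixel cost volume would be built by evaluating the function over all pixels and disparity candidates, after which the filtering and graph cut steps mentioned in the abstract select the final depth.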

Paper

TPAMI: PDF

CVPR: PDF

Code

ZIP
