Occlusion-aware Depth Estimation Using Light-field Cameras
Ting-Chun Wang and Alexei (Alyosha) Efros and Ravi Ramamoorthi
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2015-222
December 1, 2015
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2015/EECS-2015-222.pdf
Consumer-level and high-end light-field cameras are now widely available. Recent work has demonstrated practical methods for passive depth estimation from light-field images. However, most previous approaches do not explicitly model occlusions, and therefore cannot capture sharp transitions around object boundaries. A common assumption is that a pixel exhibits photo-consistency when focused to its correct depth, i.e., all viewpoints converge to a single (Lambertian) point in the scene. This assumption does not hold in the presence of occlusions, making most current approaches unreliable precisely where accurate depth information is most important: at depth discontinuities.
In this paper, we develop a depth estimation algorithm that treats occlusion explicitly; the method also enables identification of occlusion edges, which may be useful in other applications. We show that, although pixels at occlusions do not preserve photo-consistency in general, they are still consistent in approximately half the viewpoints. Moreover, the line separating the two view regions (correct depth vs. occluder) has the same orientation as the occlusion edge in the spatial domain. By treating these two regions separately, depth estimation can be improved. Occlusion predictions can also be computed and used for regularization. Experimental results show that our method outperforms current state-of-the-art light-field depth estimation algorithms, especially near occlusion boundaries.
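The core idea above, that an occluded pixel remains photo-consistent in roughly half the viewpoints, separated by a line with the occlusion edge's orientation, can be illustrated with a minimal sketch. The function below is not the authors' implementation; it simply splits an angular (viewpoint) patch along a line through the center at a given edge angle and reports the lower of the two halves' color variances, so the half unaffected by the occluder yields a near-zero (photo-consistent) score at the correct depth. All names here (`occlusion_aware_consistency`, `angular_patch`, `edge_angle`) are illustrative assumptions.

```python
import numpy as np

def occlusion_aware_consistency(angular_patch, edge_angle):
    """Sketch of occlusion-aware photo-consistency.

    angular_patch: (n, n) array of intensities, one per viewpoint,
                   for a single spatial pixel refocused to a candidate depth.
    edge_angle:    orientation (radians) of the spatial occlusion edge;
                   per the paper's observation, the same orientation divides
                   the angular domain into occluded and unoccluded halves.
    Returns the variance of the more consistent half (lower is better).
    """
    n = angular_patch.shape[0]
    c = (n - 1) / 2.0                      # center viewpoint coordinate
    ys, xs = np.mgrid[0:n, 0:n]
    # Signed distance of each viewpoint from the dividing line through the center.
    side = (xs - c) * np.sin(edge_angle) - (ys - c) * np.cos(edge_angle)
    half_a = angular_patch[side >= 0]
    half_b = angular_patch[side < 0]
    # At the correct depth, the half not containing the occluder should be
    # nearly photo-consistent, so score by the lower of the two variances.
    return min(half_a.var(), half_b.var())
```

With a synthetic 5x5 angular patch whose left viewpoints see an occluder (value 0) and right viewpoints see the true surface (value 1), splitting along a vertical edge (angle pi/2) gives two constant halves and a consistency score of exactly 0, while splitting along the wrong orientation mixes the two regions and yields a strictly positive score.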
Advisors: Ravi Ramamoorthi and Alexei (Alyosha) Efros
BibTeX citation:
@mastersthesis{Wang:EECS-2015-222,
  Author = {Wang, Ting-Chun and Efros, Alexei (Alyosha) and Ramamoorthi, Ravi},
  Title  = {Occlusion-aware Depth Estimation Using Light-field Cameras},
  School = {EECS Department, University of California, Berkeley},
  Year   = {2015},
  Month  = {Dec},
  Url    = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2015/EECS-2015-222.html},
  Number = {UCB/EECS-2015-222}
}
EndNote citation:
%0 Thesis
%A Wang, Ting-Chun
%A Efros, Alexei (Alyosha)
%A Ramamoorthi, Ravi
%T Occlusion-aware Depth Estimation Using Light-field Cameras
%I EECS Department, University of California, Berkeley
%D 2015
%8 December 1
%@ UCB/EECS-2015-222
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2015/EECS-2015-222.html
%F Wang:EECS-2015-222