TY - GEN
T1 - Content-adaptive depth map enhancement based on motion distribution
AU - Lee, Gwo Giun Chris
AU - Li, Bo Syun
AU - Chen, Chun Fu
PY - 2015/2/27
Y1 - 2015/2/27
N2 - This paper presents a motion-based content-adaptive depth map enhancement algorithm that improves the quality of depth maps and reduces artifacts in synthesized views. The proposed algorithm extracts depth cues from the motion distribution under specific camera-movement scenarios in order to align the distributions of depth and motion. In real-world scenarios, when the camera pans horizontally, the nearer an object is to the camera, the larger its motion will be, and vice versa; depth can therefore be interpreted from motion in this way. In the fixed-camera scenario, the depth cue can be derived from motion in the same manner, and the depth variation within a single moving object should be small; hence, the depth values of a moving object should not change rapidly. In addition, this paper employs a bi-directional motion-compensated infinite impulse response (IIR) low-pass filter to stabilize the consistency of depth maps over time. Consequently, the proposed algorithm not only aligns the depth map with the depth cues from motion but also enhances the stability and consistency of depth maps in the spatio-temporal domain. Experimental results show that views synthesized from the enhanced depth maps are better in both objective and subjective measurements than those obtained with the original depth maps or with state-of-the-art depth enhancement algorithms.
AB - This paper presents a motion-based content-adaptive depth map enhancement algorithm that improves the quality of depth maps and reduces artifacts in synthesized views. The proposed algorithm extracts depth cues from the motion distribution under specific camera-movement scenarios in order to align the distributions of depth and motion. In real-world scenarios, when the camera pans horizontally, the nearer an object is to the camera, the larger its motion will be, and vice versa; depth can therefore be interpreted from motion in this way. In the fixed-camera scenario, the depth cue can be derived from motion in the same manner, and the depth variation within a single moving object should be small; hence, the depth values of a moving object should not change rapidly. In addition, this paper employs a bi-directional motion-compensated infinite impulse response (IIR) low-pass filter to stabilize the consistency of depth maps over time. Consequently, the proposed algorithm not only aligns the depth map with the depth cues from motion but also enhances the stability and consistency of depth maps in the spatio-temporal domain. Experimental results show that views synthesized from the enhanced depth maps are better in both objective and subjective measurements than those obtained with the original depth maps or with state-of-the-art depth enhancement algorithms.
UR - http://www.scopus.com/inward/record.url?scp=84925425812&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84925425812&partnerID=8YFLogxK
U2 - 10.1109/VCIP.2014.7051611
DO - 10.1109/VCIP.2014.7051611
M3 - Conference contribution
T3 - 2014 IEEE Visual Communications and Image Processing Conference, VCIP 2014
SP - 482
EP - 485
BT - 2014 IEEE Visual Communications and Image Processing Conference, VCIP 2014
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2014 IEEE Visual Communications and Image Processing Conference, VCIP 2014
Y2 - 7 December 2014 through 10 December 2014
ER -
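
Note: the abstract's temporal-stabilization step can be pictured concretely. Below is a minimal sketch of a bi-directional motion-compensated first-order IIR low-pass filter over a depth sequence. It is not the authors' implementation: the warp helper, the flow-field conventions, and the alpha weight are all assumptions made for illustration only; the paper's actual filter design may differ.

import numpy as np

def warp(depth, flow):
    # Hypothetical helper: warp a depth map by a per-pixel motion field
    # (nearest-neighbor sampling; flow[..., 0] is the x displacement,
    # flow[..., 1] is the y displacement).
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xs2 = np.clip((xs + flow[..., 0]).round().astype(int), 0, w - 1)
    ys2 = np.clip((ys + flow[..., 1]).round().astype(int), 0, h - 1)
    return depth[ys2, xs2]

def temporal_iir_depth_filter(depths, fwd_flows, bwd_flows, alpha=0.5):
    # Bi-directional motion-compensated first-order IIR low-pass filter.
    #   depths:    list of (H, W) float arrays, one depth map per frame
    #   fwd_flows: fwd_flows[t] maps frame-t pixels to frame t+1 coordinates
    #   bwd_flows: bwd_flows[t] maps frame-t pixels to frame t-1 coordinates
    #   alpha:     smoothing weight (an assumed value, not from the paper)
    n = len(depths)
    # Forward pass: blend each frame with the motion-compensated
    # output of the previous frame (the IIR recursion).
    fwd = [depths[0].copy()]
    for t in range(1, n):
        prev = warp(fwd[t - 1], bwd_flows[t])  # align frame t-1 output to frame t
        fwd.append(alpha * prev + (1 - alpha) * depths[t])
    # Backward pass: the same recursion run in reverse time order.
    bwd = [None] * n
    bwd[n - 1] = depths[n - 1].copy()
    for t in range(n - 2, -1, -1):
        nxt = warp(bwd[t + 1], fwd_flows[t])  # align frame t+1 output to frame t
        bwd[t] = alpha * nxt + (1 - alpha) * depths[t]
    # Averaging the two passes yields a bi-directional low-pass result,
    # smoothing depth along motion trajectories rather than fixed pixels.
    return [(f + b) / 2 for f, b in zip(fwd, bwd)]

Motion compensation before blending is what keeps the low-pass filter from smearing depth across object boundaries: each pixel is averaged with its own trajectory through neighboring frames, which matches the abstract's goal of temporal consistency without sacrificing spatial structure.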