Content-adaptive Depth Map Enhancement Algorithm Based on Motion Distribution

  • 李 柏勳

Student thesis: Master's Thesis

Abstract

This thesis proposes a motion-based content-adaptive depth map enhancement algorithm that improves the quality of depth maps and reduces artifacts in synthesized virtual views. A depth cue is extracted from the motion distribution in a specific moving-camera scenario. In the horizontal camera-panning scenario, the smaller the distance between an object and the camera, the larger the object's motion, and vice versa; the relative distances between the camera and the objects can therefore be obtained from the motion distribution. Moreover, the distance between a moving object and the camera should remain similar and consistent in both camera-fixed and camera-panning scenarios, so the depth values of a moving object should not vary drastically. This thesis also provides a bi-directional motion-compensated infinite impulse response (IIR) depth low-pass filter to enhance the temporal consistency of depth maps. The contribution of this thesis is the use of these depth cues and the motion distribution to enhance the stability and consistency of depth maps in the spatial-temporal domain. Experimental results show that the synthesized views are better in both objective and subjective measurements than those generated from the original depth maps or from related depth map enhancement algorithms.
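The two ideas in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the thesis's actual formulation: the function names, the backward-flow input format, the nearest-neighbour warping, and the smoothing factor `alpha` are all assumptions made for illustration.

```python
import numpy as np


def motion_to_relative_depth(flow_x, eps=1e-3):
    # Motion-parallax depth cue under horizontal camera panning:
    # larger horizontal motion magnitude implies a nearer object,
    # i.e. a smaller relative depth. Output is normalised to [0, 1].
    inv = 1.0 / (np.abs(flow_x) + eps)
    return (inv - inv.min()) / (inv.max() - inv.min() + eps)


def temporal_iir_depth_filter(depth_frames, flows, alpha=0.5):
    # First-order IIR low-pass over depth along motion trajectories
    # (single direction only; the thesis filter is bi-directional).
    # depth_frames: list of HxW float depth maps, one per frame.
    # flows: list of HxWx2 arrays; flows[t] is assumed to map pixels
    #        of frame t back to frame t-1 (backward optical flow).
    # alpha: blending weight; larger alpha trusts the current frame more.
    h, w = depth_frames[0].shape
    ys, xs = np.mgrid[0:h, 0:w]
    out = [depth_frames[0].astype(float)]
    for t in range(1, len(depth_frames)):
        # Warp the previously filtered depth onto the current frame's
        # grid with a nearest-neighbour lookup along the backward flow.
        py = np.clip(np.rint(ys + flows[t][..., 1]), 0, h - 1).astype(int)
        px = np.clip(np.rint(xs + flows[t][..., 0]), 0, w - 1).astype(int)
        prev_warped = out[-1][py, px]
        # IIR update: blend current depth with motion-compensated history.
        out.append(alpha * depth_frames[t] + (1 - alpha) * prev_warped)
    return out
```

The IIR form means each output frame carries an exponentially decaying memory of all earlier frames along each motion trajectory, which is what suppresses frame-to-frame depth flicker on moving objects.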
Date of Award: 2014 Sept 10
Original language: English
Supervisor: Gwo-Giun Lee (Supervisor)
