Texture-based depth frame interpolation for precise 2D-to-3D conversion

Kuan Ting Lee, En Shi Shih, Jar Ferr Yang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In general, if 3D videos are represented by texture frames and their corresponding depth frames, 3D multiview content can be effectively produced by depth-image-based rendering (DIBR). In recent years, many approaches have been proposed to estimate depth maps from stereo images. For many 2D movies, however, traditional depth-cue methods are limited to specific scenery and achieve poor depth quality. In this paper, we propose a precise depth map interpolation algorithm that estimates the depth maps of intermediate frames from two known depth maps, used as depth keyframes, together with the color texture frames. After proper computation of superpixels, the proposed depth frame interpolation system performs texture-based depth estimation, error compensation, noise elimination, and forward/backward depth map merging. Simulations show that the proposed system achieves high-quality depth frame interpolation.
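The paper provides no source code; the following Python sketch only illustrates the general idea of superpixel-guided interpolation between two depth keyframes described in the abstract. The function name interpolate_depth, its parameters, and the median-based per-superpixel forward/backward estimates are illustrative assumptions, not the authors' actual texture-based estimation, error compensation, or noise elimination steps.

# Minimal sketch (assumed, not the authors' implementation) of texture-guided
# depth interpolation between two depth keyframes using superpixels.
import numpy as np
from skimage.segmentation import slic

def interpolate_depth(texture, depth_prev, depth_next, alpha, n_segments=400):
    """texture: HxWx3 color frame lying between the two keyframes.
    depth_prev, depth_next: HxW depth keyframes.
    alpha: temporal position of the texture frame in [0, 1]."""
    # Segment the intermediate color frame into superpixels.
    labels = slic(texture, n_segments=n_segments, compactness=10, start_label=0)
    depth = np.zeros(texture.shape[:2], dtype=np.float32)
    for s in np.unique(labels):
        mask = labels == s
        # Forward/backward depth estimates for this superpixel: here simply the
        # median keyframe depth over the region (a stand-in for the paper's
        # texture-based depth estimation and error compensation steps).
        d_fwd = np.median(depth_prev[mask])
        d_bwd = np.median(depth_next[mask])
        # Merge the forward and backward estimates by temporal distance.
        depth[mask] = (1.0 - alpha) * d_fwd + alpha * d_bwd
    return depth

In practice the per-superpixel estimates would be refined by texture matching and noise elimination before the forward/backward merge, as the abstract outlines.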

Original language: English
Title of host publication: 26th International Display Workshops, IDW 2019
Publisher: International Display Workshops
Pages: 157-160
Number of pages: 4
ISBN (Electronic): 9781713806301
DOIs
Publication status: Published - 2019
Event: 26th International Display Workshops, IDW 2019 - Sapporo, Japan
Duration: 2019 Nov 27 - 2019 Nov 29

Publication series

Name: Proceedings of the International Display Workshops
Volume: 1
ISSN (Print): 1883-2490

Conference

Conference: 26th International Display Workshops, IDW 2019
Country/Territory: Japan
City: Sapporo
Period: 19-11-27 - 19-11-29

All Science Journal Classification (ASJC) codes

  • Computer Vision and Pattern Recognition
  • Human-Computer Interaction
  • Electrical and Electronic Engineering
  • Electronic, Optical and Magnetic Materials
