Stereo dense matching, which plays a key role in 3D reconstruction, remains a challenging task in photogrammetry and computer vision. In addition to block-based matching, recent studies based on machine learning have achieved great progress in stereo dense matching by using deep convolutional neural networks (DCNNs). In this paper, a novel neural network called the dual guided-diffusion network (Dual-GDNet) is proposed, which utilizes both left-to-right and right-to-left image matching in the network design and training, with a consistentization process to reduce the possibility of mismatching. In addition, suppressed regression is proposed to refine disparity estimation by removing unrelated information before regression, preventing ambiguous predictions on multi-peak probability distributions. The proposed Dual-GDNet can be applied to existing DCNN models to further improve disparity estimation. To evaluate performance, GA-Net is selected as the backbone, and the model is evaluated on stereo datasets including Scene Flow and KITTI 2015. Experimental results demonstrate the superiority of the proposed model over related models in terms of end-point error, >1-pixel error rate, and top-2 error, with improvements of 2-10% on the Scene Flow dataset and 2-8% on the KITTI 2015 dataset.
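To make the idea behind suppressed regression concrete, the following is a minimal sketch, not the thesis implementation: plain soft-argmax disparity regression averages over the whole probability volume, so a multi-peak distribution yields a value between the peaks, whereas here the probability mass outside a window around the dominant peak is suppressed and the distribution renormalized before taking the expectation. The window radius `k`, tensor shapes, and function name are illustrative assumptions.

```python
# Hedged sketch of suppressed disparity regression (assumed details, not the
# thesis code): zero out probability mass far from the dominant peak, then
# take the expectation over the remaining window.
import torch
import torch.nn.functional as F


def suppressed_disparity_regression(cost_volume: torch.Tensor, k: int = 4) -> torch.Tensor:
    """cost_volume: (B, D, H, W) matching costs; returns (B, H, W) disparities."""
    B, D, H, W = cost_volume.shape
    prob = F.softmax(-cost_volume, dim=1)                 # lower cost -> higher probability
    peak = prob.argmax(dim=1, keepdim=True)               # dominant mode per pixel, (B, 1, H, W)
    disp = torch.arange(D, device=prob.device).view(1, D, 1, 1)
    mask = (disp - peak).abs() <= k                       # keep only disparities near the peak
    prob = prob * mask                                     # suppress unrelated probability mass
    prob = prob / prob.sum(dim=1, keepdim=True).clamp_min(1e-8)
    return (prob * disp).sum(dim=1)                        # expectation over the kept window


if __name__ == "__main__":
    costs = torch.randn(2, 192, 64, 128)                   # toy cost volume with 192 disparity levels
    print(suppressed_disparity_regression(costs).shape)    # torch.Size([2, 64, 128])
```

A full soft-argmax would instead compute the expectation over all D levels; restricting it to a single-peak neighborhood is what avoids the ambiguous averaged prediction described above.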
| Date of Award | 2020 |
|---|---|
| Original language | English |
| Supervisor | Chao-Hung Lin (Supervisor) |
Dual-GDNet: Dual Guided-diffusion Network for Stereo Image Dense Matching
瑞評, 王. (Author). 2020
Student thesis: Doctoral Thesis