3D point cloud classification for autonomous driving via dense-residual fusion network

Chung Hsin Chiang, Chih Hung Kuo, Chien Chou Lin, Hsin Te Chiang

Research output: Article › peer-review

Abstract

With the development of unmanned vehicles, recognizing objects from the information collected by onboard sensors has become important. However, feeding a raw 3D point cloud into a 2D convolutional neural network without preprocessing restricts the network's feature expression compared with state-of-the-art architectures. To address this issue, we propose a high-precision classification network that uses bearing-angle (BA) images, depth images, and RGB images. Our approach takes data from a LiDAR and a camera and projects the 3D point cloud into 2D BA images and depth images; the RGB image captured by the camera is used to select the region of interest (ROI) corresponding to the point cloud. Simply adding input modalities is not enough to improve the classification ability of a general convolutional neural network, so we use a Dense-Residual Fusion Network (DRF-Net) composed of Dense-Residual Blocks (DRBs). With the three input formats, DRF-Net achieves 97.92% accuracy on the KITTI raw dataset.
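As an illustration of the projection step, the sketch below converts an organized LiDAR range image into a BA image following the standard bearing-angle definition (the angle between the laser beam and the segment joining two adjacent scan points, via the law of cosines). The function name, the horizontal-neighbor choice, and the 8-bit scaling are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np

def bearing_angle_image(range_img, delta_phi):
    """Project an organized LiDAR range image into a bearing-angle (BA) image.

    range_img : (H, W) array of ranges rho, one row per laser ring.
    delta_phi : horizontal angular resolution between adjacent columns (rad).
    """
    rho = range_img.astype(np.float64)
    # Neighbor along the scan line; np.roll wraps column 0 to the last
    # column, which suits a full 360-degree spinning scan.
    rho_prev = np.roll(rho, 1, axis=1)
    # Distance between adjacent scan points (law of cosines).
    d = np.sqrt(rho**2 + rho_prev**2 - 2.0 * rho * rho_prev * np.cos(delta_phi))
    d = np.maximum(d, 1e-9)  # guard against empty returns (rho == 0)
    # BA = angle between the laser beam and the segment joining the two points.
    ba = np.arccos(np.clip((rho - rho_prev * np.cos(delta_phi)) / d, -1.0, 1.0))
    return np.uint8(ba / np.pi * 255.0)  # scale [0, pi] to an 8-bit image
```

A depth image can be produced from the same organized range image by normalizing rho directly, so both 2D LiDAR-derived inputs come from a single sweep.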
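The abstract names Dense-Residual Blocks but does not spell out their internals. Below is a minimal PyTorch sketch of one plausible reading: DenseNet-style feature concatenation inside the block plus a ResNet-style skip around it. The layer count, growth rate, and 1x1 fusion convolution are arbitrary choices for this sketch, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DenseResidualBlock(nn.Module):
    """Illustrative DRB: dense connections within the block, residual skip
    from block input to output. Widths/depths are assumed, not from the paper."""

    def __init__(self, channels: int, growth: int = 32, n_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, kernel_size=3, padding=1, bias=False),
                nn.BatchNorm2d(growth),
                nn.ReLU(inplace=True),
            ))
            in_ch += growth  # each layer sees all previous feature maps
        # 1x1 conv fuses the concatenated features back to `channels`
        # so the residual addition is dimensionally valid.
        self.fuse = nn.Conv2d(in_ch, channels, kernel_size=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return x + self.fuse(torch.cat(feats, dim=1))  # residual skip

if __name__ == "__main__":
    block = DenseResidualBlock(channels=64)
    print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```

In a fusion network of this kind, one would typically run a branch of such blocks per modality (BA, depth, RGB ROI) and merge the branch features before the classifier; the exact fusion point in DRF-Net is not specified in the abstract.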

Original language: English
Pages (from-to): 163775-163783
Number of pages: 9
Journal: IEEE Access
Volume: 8
Publication status: Published - 2020

All Science Journal Classification (ASJC) codes

  • Computer Science (all)
  • Materials Science (all)
  • Engineering (all)

