3D Object Model Aided RGBD-CNN Object Orientation Justification and Convolutional Autoencoder Grasping Points Generation Method

  • 張 凱傑

Student thesis: Doctoral Thesis


A 3D object grasping point learning system is proposed in this thesis, consisting of object coordinate construction and grasping point learning. To construct the object coordinate frame, the pose of the object must first be estimated. An RGBD Convolutional Neural Network (RGBD-CNN) is proposed to classify the orientation type of the object; an object model and the iterative closest point (ICP) algorithm are then applied to estimate the object pose, from which the object coordinate frame is constructed.

To learn the object grasping region, normal-vector images and a depth image of the object are obtained first. The grasping range of the end effector (palm) is then simulated on these images. Finally, a Convolutional Autoencoder (CAE) encodes the physical characteristics of the simulated palm images. By comparing the features of a simulated palm against those in the database through a 3D KD-tree, the proposed method evaluates candidate grasping points. By integrating the object coordinate frame with the learned grasping points, the robot plans a suitable grasping point for the assigned task.

It is worth noting that most prior research emphasizes either object orientation estimation or grasping point generation alone, whereas this work treats object pose estimation and grasping point generation jointly, allowing the robot to adapt to different task situations in real time. The first experiment shows that the robot can understand the spatial relationship between objects through the object coordinate system. In the second experiment, the robot successfully moves an object from a random pose to an assigned pose.
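The two core numerical steps summarized above, rigid pose alignment of an object model against observed points and nearest-neighbour lookup of palm features in a 3D KD-tree database, can be sketched as follows. This is a minimal illustration, not the thesis implementation: the single SVD (Kabsch) alignment step stands in for one ICP iteration with known correspondences, and the random 3D vectors stand in for CAE feature codes; all names and toy data are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """One ICP-style alignment step (Kabsch/SVD) for point sets with known
    correspondences: returns R, t such that R @ src[i] + t ~ dst[i]."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# --- demo: recover a known rigid transform of a toy "object model" ---
rng = np.random.default_rng(0)
model = rng.normal(size=(50, 3))                  # hypothetical model points
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.3])
scene = model @ R_true.T + t_true                 # "observed" object points
R_est, t_est = best_fit_transform(model, scene)

# --- grasp lookup: nearest neighbour in a 3D feature database ---
db_features = rng.normal(size=(100, 3))           # hypothetical CAE codes
tree = cKDTree(db_features)
query = db_features[7] + 0.01                     # slightly perturbed code
dist, idx = tree.query(query)                     # index of closest stored grasp
```

In a full pipeline, ICP would iterate correspondence search and this alignment step until convergence, and the KD-tree query would return the stored grasping configuration associated with the best-matching palm feature.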
Date of Award: 2020
Original language: English
Supervisor: Tzuu-Hseng S. Li (Supervisor)