Implementation of Kinect-based 3D Vision and Cognition Learning System Based Object Grasping Control for Home Service Robots

Translated Title: 植基於Kinect之3D影像建立與以認知學習系統為基礎之物件抓取控制策略於居家服務型機器人
Author: 黃翊倫

Student thesis: Master's Thesis

Abstract

This thesis proposes a method that combines Kinect-based 3D vision with a cognition learning system, allowing a home service robot to automatically adjust the posture of its hand when grasping an object. The vision system consists of an object recognition subsystem and a tracking subsystem. For object recognition, feature detection uses the Speeded-Up Robust Features (SURF) algorithm, while feature description uses Binary Robust Invariant Scalable Keypoints (BRISK). The tracking subsystem uses Tracking-Learning-Detection (TLD), which not only tracks the target object but also learns from the visual data and updates its database in real time. To determine the posture of the robotic arm, the 2D image and infrared data obtained from the Kinect are transformed into a 3D representation described as a point cloud, from which the spatial information of the object is obtained. Candidate grasp points are then planned, and the appropriate posture is selected according to the palm's limits and whether an obstacle lies along the path of the fingers as they close. To allow the robot to adjust its grasping posture automatically, the cognition learning system is designed around two modes of human thinking: the fast, intuitive System 1 and the slow, rational System 2, as proposed by the psychologist Daniel Kahneman in his book "Thinking, Fast and Slow". Finally, the proposed method is applied to the home service robot, and its feasibility is verified by the experimental results.
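The abstract pairs SURF keypoint detection with BRISK feature description for object recognition. The following is a minimal illustrative sketch of that pairing using OpenCV, not the thesis implementation; the image file names, the Hessian threshold, and the brute-force Hamming matcher are assumptions, and SURF requires an opencv-contrib build of the library.

import cv2

# Placeholder images: a template of the target object and a scene to search (assumed file names).
object_img = cv2.imread("object_template.png", cv2.IMREAD_GRAYSCALE)
scene_img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # keypoint detector (assumed threshold)
brisk = cv2.BRISK_create()                                 # binary descriptor extractor

# Detect keypoints with SURF, then describe the same keypoints with BRISK.
kp_obj = surf.detect(object_img, None)
kp_obj, des_obj = brisk.compute(object_img, kp_obj)
kp_scene = surf.detect(scene_img, None)
kp_scene, des_scene = brisk.compute(scene_img, kp_scene)

# BRISK descriptors are binary strings, so they are matched with Hamming distance.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des_obj, des_scene)
print(f"{len(matches)} cross-checked matches")

Combining a floating-point detector with a binary descriptor keeps the detection repeatability of SURF while making matching fast, which is consistent with the real-time recognition goal described above.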
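The abstract also describes transforming the Kinect's 2D image and infrared depth data into a point cloud that provides the spatial information of the object. Below is a minimal sketch of standard pinhole back-projection under assumed parameters; the intrinsic values are commonly cited Kinect v1 depth-camera calibration figures, not numbers taken from the thesis.

import numpy as np

# Assumed Kinect v1 depth-camera intrinsics (focal lengths and principal point, in pixels).
FX, FY = 594.21, 591.04
CX, CY = 339.5, 242.7

def depth_to_point_cloud(depth_m):
    """Back-project an HxW depth image (metres) into an Nx3 array of (X, Y, Z) points."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no valid depth reading

# Example with a synthetic 480x640 frame in which every pixel reads 1.2 m.
cloud = depth_to_point_cloud(np.full((480, 640), 1.2))
print(cloud.shape)

Each 3D point recovered this way can then be inspected for grasp planning, for example to check the palm limits and to test for obstacles along the finger paths as the abstract describes.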
Date of Award: 22 July 2014
Original language: English
Supervisor: Tzuu-Hseng S. Li

