
Vision-Based Robotic Object Grasping—A Deep Reinforcement Learning Approach

Research output: Article › peer-review

34 Citations (Scopus)

Abstract

This paper focuses on developing a robotic object grasping approach that possesses the ability of self-learning, is suitable for small-volume, large-variety production, and has a high success rate in object grasping/pick-and-place tasks. The proposed approach consists of a computer vision-based object detection algorithm and a deep reinforcement learning algorithm with self-learning capability. In particular, the You Only Look Once (YOLO) algorithm is employed to detect and classify all objects of interest within the field of view of a camera. Based on the detection/localization and classification results provided by YOLO, the Soft Actor-Critic deep reinforcement learning algorithm is employed to provide a desired grasp pose for the robot manipulator (i.e., the learning agent) to perform object grasping. To speed up the training process, reduce the cost of training data collection, and lower the likelihood of damaging the robot manipulator through improper actions during training, this paper employs the Sim-to-Real technique. The V-REP platform is used to construct a simulation environment for training the deep reinforcement learning neural network. Several experiments have been conducted, and the results indicate that the 6-DOF industrial manipulator successfully performs object grasping with the proposed approach, even in the case of previously unseen objects.
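The abstract describes a two-stage pipeline: a YOLO detector localizes and classifies objects, and a Soft Actor-Critic policy then maps those detection results to a 6-DOF grasp pose. The following is a minimal sketch of that data flow only; the detector stub, network shapes, and feature layout are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of the detection -> grasp-pose pipeline described in the abstract.
# detect_objects() stands in for YOLO; GraspPolicy stands in for the trained
# SAC actor. All dimensions and values here are hypothetical.
import numpy as np

def detect_objects(image):
    """Stand-in for YOLO: returns one (class_id, cx, cy, w, h) row per
    detection. A fixed dummy detection is returned for illustration."""
    return np.array([[0.0, 0.5, 0.5, 0.2, 0.3]])

class GraspPolicy:
    """Tiny deterministic stand-in for the SAC actor network."""
    def __init__(self, in_dim=5, out_dim=6, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, (in_dim, out_dim))

    def act(self, features):
        # SAC actors typically squash actions to [-1, 1] with tanh.
        return np.tanh(features @ self.W)

def propose_grasp(image, policy):
    det = detect_objects(image)[0]   # use the first detection
    # 6-DOF grasp pose: (x, y, z, roll, pitch, yaw), normalized
    return policy.act(det)

pose = propose_grasp(np.zeros((64, 64, 3)), GraspPolicy())
print(pose.shape)  # (6,)
```

In the paper's actual system the policy is trained in a V-REP simulation before Sim-to-Real transfer; this sketch only shows the inference-time data flow from detection features to a pose vector.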

Original language: English
Article number: 275
Journal: Machines
Volume: 11
Issue number: 2
DOIs
Publication status: Published - Feb 2023

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Computer Science (miscellaneous)
  • Mechanical Engineering
  • Control and Optimization
  • Industrial and Manufacturing Engineering
  • Electrical and Electronic Engineering
