Visual-Guided Robot Arm Using Multi-Task Faster R-CNN

Phong Phu Le, Van Thanh Nguyen, Shu Mei Guo, Ching Ting Tu, Jenn Jier James Lien

Research output: Conference contribution

2 Citations (Scopus)

Abstract

The limitations of current visual recognition methods are a major obstacle to applying automated robot arm systems in industrial projects, which require both high precision and high speed. In this work, we present a Faster R-CNN-based multi-task network, a deep neural network model that simultaneously performs three tasks: object detection, category classification, and object angle estimation. The outputs of all three tasks are then used to determine a picking point and a gripper rotation angle for the pick-and-place robot arm system. Test results show that our network achieves a mean average precision of 86.6% at an IoU (Intersection over Union) threshold of 0.7, and a mean accuracy of 83.5% for the final prediction, which combines object localization and angle estimation. In addition, the proposed multi-task network takes approximately 0.072 seconds to process an image, which is acceptable for pick-and-place robot arms.
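To illustrate the three-head design described in the abstract, the following is a minimal sketch (not the authors' code) of a multi-task ROI head in PyTorch that produces class scores, bounding-box regression offsets, and an angle estimate from shared pooled features. The layer sizes, the number of angle bins, and the discrete-bin angle parameterization are assumptions for illustration only.

# Minimal sketch, assuming a Faster R-CNN-style pipeline where ROI-pooled
# feature vectors are fed to a shared trunk with three task-specific heads.
import torch
import torch.nn as nn

class MultiTaskROIHead(nn.Module):
    def __init__(self, in_features=1024, num_classes=10, num_angle_bins=18):
        super().__init__()
        # Shared fully connected trunk over pooled ROI features
        self.shared = nn.Sequential(
            nn.Linear(in_features, 512),
            nn.ReLU(inplace=True),
        )
        # Task 1: category classification (object classes plus background)
        self.cls_head = nn.Linear(512, num_classes + 1)
        # Task 2: bounding-box regression (4 offsets per class)
        self.box_head = nn.Linear(512, 4 * (num_classes + 1))
        # Task 3: object angle estimation, treated here as classification
        # over discrete angle bins (an assumed design choice)
        self.angle_head = nn.Linear(512, num_angle_bins)

    def forward(self, roi_features):
        x = self.shared(roi_features)
        return self.cls_head(x), self.box_head(x), self.angle_head(x)

if __name__ == "__main__":
    head = MultiTaskROIHead()
    rois = torch.randn(8, 1024)  # 8 pooled ROI feature vectors
    cls_logits, box_deltas, angle_logits = head(rois)
    print(cls_logits.shape, box_deltas.shape, angle_logits.shape)

In a full pick-and-place pipeline of the kind the paper describes, the detected box and predicted angle for the top-scoring class would then be mapped to a picking point and a gripper rotation command; that mapping is not shown here.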

Original language: English
Title of host publication: Proceedings - 2019 International Conference on Technologies and Applications of Artificial Intelligence, TAAI 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (electronic): 9781728146669
DOIs
Publication status: Published - Nov 2019
Event: 24th International Conference on Technologies and Applications of Artificial Intelligence, TAAI 2019 - Kaohsiung, Taiwan
Duration: 21 Nov 2019 to 23 Nov 2019

Publication series

Name: Proceedings - 2019 International Conference on Technologies and Applications of Artificial Intelligence, TAAI 2019

Conference

Conference: 24th International Conference on Technologies and Applications of Artificial Intelligence, TAAI 2019
Country/Territory: Taiwan
City: Kaohsiung
Period: 19-11-21 to 19-11-23

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Computer Science Applications
  • Human-Computer Interaction
