Lightweight Robotic Grasping Model Based on Template Matching and Depth Image

Research output: Article › peer-review

Abstract

This letter proposes a lightweight DNN model for robotic grasping applications with only 1.5 million parameters. In the proposed model, the location of the target object is estimated using a pairwise template matching method, while the orientation of the object is predicted from depth images using a convolutional neural network (CNN). The feasibility of the proposed model is demonstrated both numerically and experimentally on an NVIDIA Jetson NX developer kit. The experimental results show that the grasping system achieves an accuracy of 96.3% and a running time of 125 ms when evaluated on 700 images. Moreover, when applied to practical grasping tasks on 20 unseen objects selected from the Cornell grasping dataset, the system achieves an accuracy of 92.5%, which is comparable to that of existing state-of-the-art methods reported in the literature.
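The abstract outlines a two-stage pipeline: template matching localizes the target object, and a CNN operating on depth images predicts the grasp orientation. The Python sketch below illustrates that general structure using OpenCV's matchTemplate and a small PyTorch CNN; the template size, 64x64 crop size, network layout, and angle-regression head are illustrative assumptions and not the architecture described in the letter.

```python
# Minimal sketch of the two-stage idea: (1) locate the object by template
# matching on a depth image, (2) predict a grasp angle from a depth crop
# with a small CNN. All sizes and layers are illustrative assumptions.
import cv2
import numpy as np
import torch
import torch.nn as nn

def locate_object(depth_image: np.ndarray, template: np.ndarray):
    """Return the top-left corner (x, y) of the best template match."""
    result = cv2.matchTemplate(depth_image, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    return max_loc

class OrientationCNN(nn.Module):
    """Small CNN that regresses a grasp angle from a 1-channel depth crop."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # predicted grasp angle (radians)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

if __name__ == "__main__":
    depth = np.random.rand(480, 640).astype(np.float32)  # stand-in depth image
    template = depth[100:164, 200:264].copy()             # stand-in 64x64 template
    x, y = locate_object(depth, template)
    crop = depth[y:y + 64, x:x + 64]
    angle = OrientationCNN()(torch.from_numpy(crop)[None, None])
    print(f"object at ({x}, {y}), predicted angle {angle.item():.3f} rad")
```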

Original language: English
Pages (from-to): 1
Number of pages: 1
Journal: IEEE Embedded Systems Letters
DOIs
Publication status: Accepted/In press - 2022

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Computer Science (all)
