TY - GEN
T1 - Piles of Objects Detection for Grasping System Using Modified RGB-D MobileNetV3
AU - Lin, Bor Haur
AU - He, Wei
AU - Shih, Kai Jung
AU - Li, Chih Hung G.
AU - Lien, Jenn Jier James
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - In modern industrial and production environments, robots are playing an increasingly important role. The combination of machine vision and robotic arms has shown great advantages in various automated processes. Automatic grasping with robotic arms can effectively reduce labor costs, improve efficiency, and simplify management. With this trend, more and more applications are emerging, such as high-precision parts processing and logistics warehousing and transshipment. However, enabling robots to grasp objects still faces challenges. Humans can easily perceive and grasp objects in space, but for robots, our research focus is how to perceive the ever-changing shapes and poses of objects from environmental information, adapt safely and flexibly to specific rules (for example, objects with offset centers of gravity must be grasped near the center of gravity, and fragile parts of objects must not be grasped), generate corresponding grasping strategies, and maintain a sufficient success rate. This study uses deep learning methods to design an automated mechanical grasping system that combines an RGB-D camera, machine vision, and a robotic arm. The proposed network architecture uses MobileNetV3 to extract global features from color and depth images and then generates the robotic arm's grasping strategy, outputting the position and rotation angle of the object. Finally, the accuracy and success rate of grasping are tested in a real environment. Our method achieves a success rate above 90% on the Wood and BinObjects datasets.
AB - In modern industrial and production environments, robots are playing an increasingly important role. The combination of machine vision and robotic arms has shown great advantages in various automated processes. Automatic grasping with robotic arms can effectively reduce labor costs, improve efficiency, and simplify management. With this trend, more and more applications are emerging, such as high-precision parts processing and logistics warehousing and transshipment. However, enabling robots to grasp objects still faces challenges. Humans can easily perceive and grasp objects in space, but for robots, our research focus is how to perceive the ever-changing shapes and poses of objects from environmental information, adapt safely and flexibly to specific rules (for example, objects with offset centers of gravity must be grasped near the center of gravity, and fragile parts of objects must not be grasped), generate corresponding grasping strategies, and maintain a sufficient success rate. This study uses deep learning methods to design an automated mechanical grasping system that combines an RGB-D camera, machine vision, and a robotic arm. The proposed network architecture uses MobileNetV3 to extract global features from color and depth images and then generates the robotic arm's grasping strategy, outputting the position and rotation angle of the object. Finally, the accuracy and success rate of grasping are tested in a real environment. Our method achieves a success rate above 90% on the Wood and BinObjects datasets.
UR - http://www.scopus.com/inward/record.url?scp=85175264424&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85175264424&partnerID=8YFLogxK
U2 - 10.1109/ARIS59192.2023.10268480
DO - 10.1109/ARIS59192.2023.10268480
M3 - Conference contribution
AN - SCOPUS:85175264424
T3 - International Conference on Advanced Robotics and Intelligent Systems, ARIS
BT - 2023 International Conference on Advanced Robotics and Intelligent Systems, ARIS 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2023 International Conference on Advanced Robotics and Intelligent Systems, ARIS 2023
Y2 - 30 August 2023 through 1 September 2023
ER -