Abstract
In material handling tasks, a mobile manipulator is driven by its mobile base to reach a workstation. This study adopts an uncalibrated eye-in-hand vision system to provide visual information for the manipulator to pick up a workpiece on the station. A novel vision-guided control strategy with a behavior-based look-and-move structure is proposed. The strategy is based on six image features predefined using the image moment method. In the designed neural-fuzzy controllers with a varying learning rate, each image feature error is used to generate, through fuzzy rules, a one-DOF motion command relative to the camera coordinate frame; each such mapping defines a particular visual behavior. These behaviors are then fused by the proposed behavior fusion scheme to produce the final command for the grasping task. Finally, the proposed control strategy is experimentally applied to control the end-effector to approach and grasp a workpiece placed at various locations on the station.
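The sketch below illustrates the general idea described in the abstract: moment-based image features are extracted, each feature error drives one DOF of the camera-frame motion, and the per-feature behaviors are fused into a single command. It is a minimal illustration, not the paper's method: the six specific features, the crisp dead-band/saturation rule standing in for the neural-fuzzy controller with varying learning rate, and the helper names (`image_moments`, `behavior_command`, `fuse_behaviors`) are all assumptions made for the example.

```python
import numpy as np

def image_moments(binary_img):
    """Moment-based features of a binary object image (illustrative choice of six)."""
    ys, xs = np.nonzero(binary_img)
    m00 = xs.size                                    # area (zeroth moment)
    xc, yc = xs.mean(), ys.mean()                    # centroid (first moments / m00)
    mu20 = ((xs - xc) ** 2).mean()                   # second central moments per unit area
    mu02 = ((ys - yc) ** 2).mean()
    mu11 = ((xs - xc) * (ys - yc)).mean()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # object orientation
    return np.array([xc, yc, m00, theta, mu20, mu02], dtype=float)

def behavior_command(error, gain=0.5, dead_band=0.02, saturation=1.0):
    """Crisp stand-in for one fuzzy behavior: one feature error -> one DOF command."""
    if abs(error) < dead_band:                       # "zero" fuzzy set: no motion
        return 0.0
    return float(np.clip(gain * error, -saturation, saturation))

def fuse_behaviors(feature_errors, weights):
    """Weighted fusion of the per-feature behaviors into a 6-DOF camera-frame command."""
    cmds = np.array([behavior_command(e) for e in feature_errors])
    return weights * cmds

if __name__ == "__main__":
    img = np.zeros((120, 160), dtype=np.uint8)
    img[40:80, 60:110] = 1                           # synthetic workpiece blob
    features = image_moments(img)
    desired = np.array([80.0, 60.0, 2500.0, 0.0, 180.0, 140.0])  # taught reference values
    errors = (desired - features) / (np.abs(desired) + 1e-9)     # normalized feature errors
    v_cam = fuse_behaviors(errors, weights=np.ones(6))
    print("6-DOF velocity command in camera frame:", np.round(v_cam, 3))
```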
| Original language | English |
|---|---|
| Pages (from-to) | 94-102 |
| Number of pages | 9 |
| Journal | Artificial Life and Robotics |
| Volume | 23 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 2018 Mar 1 |
All Science Journal Classification (ASJC) codes
- General Biochemistry, Genetics and Molecular Biology
- Artificial Intelligence