The developed mobile manipulator is composed primarily of a mobile base, a robot manipulator, and an eye-in-hand vision system. Material handling with the mobile manipulator has two stages: guiding the mobile base between stations, and picking up a workpiece at a station. Fast landmark recognition and obstacle detection based on color segmentation are proposed for path following, obstacle avoidance, and mobile base positioning. Using machine vision, a vision-based vector field histogram method is modified and applied to guide the mobile manipulator around obstacles. After the mobile manipulator arrives at a station, however, positioning errors of the mobile base and non-horizontality of the ground inevitably introduce position and orientation errors of the mobile base relative to the station. To compensate, a vision-guided control strategy with a behavior-based look-and-move structure is presented, built on six predefined image features. In the designed neural fuzzy controllers, each image feature drives fuzzy rules that generate one degree-of-freedom motion command relative to the camera coordinate frame, thereby defining a specific visual behavior. These behaviors are then combined and executed in turn to perform grasping tasks.
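The vector-field-histogram idea above can be illustrated with a minimal sketch: obstacle pixels from a color-segmented image are binned into angular sectors to form a polar histogram, and the free sector closest to straight ahead is chosen as the steering direction. The function name, sector count, density threshold, and 60° field of view are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def vfh_steering(obstacle_mask, n_sectors=36, threshold=0.2, fov_deg=60.0):
    """Pick a steering angle (degrees) from a binary obstacle mask (H x W).

    Illustrative simplification of the vector field histogram: image
    columns are grouped into angular sectors, the obstacle density per
    sector forms a polar histogram, and the sector nearest to straight
    ahead whose density falls below `threshold` (a "valley") is chosen.
    """
    density = obstacle_mask.mean(axis=0)          # per-column obstacle density
    sectors = np.array_split(density, n_sectors)  # group columns into sectors
    hist = np.array([s.mean() for s in sectors])  # polar obstacle histogram
    center = n_sectors // 2                       # sector for "straight ahead"
    free = np.flatnonzero(hist < threshold)       # candidate free sectors
    if free.size == 0:
        return None                               # fully blocked: no safe heading
    best = free[np.argmin(np.abs(free - center))] # free sector closest to ahead
    # Map sector index to an angle in [-fov/2, +fov/2] across the camera's view
    return (best + 0.5) / n_sectors * fov_deg - fov_deg / 2
```

With obstacles filling the left half of the view, the sketch steers slightly to the right of center; with every sector occupied, it reports no safe heading so the base can stop or re-plan.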
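The per-feature fuzzy behaviors can likewise be sketched: one normalized image-feature error is mapped through triangular membership functions and a small rule base to a single degree-of-freedom velocity command. This is a generic zero-order Sugeno-style controller for illustration only; the paper's neural fuzzy controllers, their rule bases, and their tuning are more elaborate, and the membership breakpoints and `scale` gain here are assumed values.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_dof_command(error, scale=1.0):
    """Map one image-feature error in [-1, 1] to one DOF velocity command.

    Rules: error Negative -> move Negative, Zero -> hold, Positive -> move
    Positive. Defuzzified by the weighted average of rule outputs.
    """
    memberships = {
        'N': tri(error, -2.0, -1.0, 0.0),
        'Z': tri(error, -1.0,  0.0, 1.0),
        'P': tri(error,  0.0,  1.0, 2.0),
    }
    outputs = {'N': -scale, 'Z': 0.0, 'P': scale}   # crisp rule consequents
    num = sum(memberships[k] * outputs[k] for k in memberships)
    den = sum(memberships.values())
    return num / den if den > 0 else 0.0
```

Six such controllers, one per predefined image feature, would each emit a command along one camera-frame axis; executing them in turn reproduces the behavior-based look-and-move sequencing described above.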