With the development of artificial intelligence, autonomous vehicles and mobile robots have gradually shown their potential in various applications. Accordingly, contest platforms such as Duckietown and AutoRace have been developed for learning the related techniques. This paper summarizes and demonstrates our efforts in developing navigation and task-planning capabilities for the AutoRace challenge, in which a mobile robot must recognize a designated track and reach the destination in minimum time with a minimum number of faults. In this work, a ROS-based mobile robot, the TurtleBot3 Burger, is adopted as the development platform and contest vehicle. In cooperation with cameras and a 2D LiDAR, various image processing techniques and deep learning algorithms are developed and implemented to accomplish the missions. The developed system can be divided into three main parts, responsible for system management, image processing, and mission decision making, respectively. The details of their relationships and the overall workflow are further discussed in this paper. In addition, a deep learning object detection model, YOLOv4, is introduced to improve the detection of traffic signs. Through these efforts, the entire mission can be completed smoothly within 2 to 3 minutes under normal conditions. Finally, we discuss several problems caused by limitations of the hardware and system capability. Efforts to improve the success rate in various situations are currently under investigation, aiming to further reduce the completion time and increase stability. These results will be studied further for application to real-life technologies.
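As a rough illustration of the three-part split described above (system management, image processing, and mission decision making), the sketch below shows one possible way the components could hand results to each other. All names, states, and the flow itself are our own assumptions for exposition, not the paper's actual implementation; in the real system the image-processing stage would run a YOLOv4 detector on camera frames rather than the stub shown here.

```python
from enum import Enum, auto

# Hypothetical mission states; the actual AutoRace missions may differ.
class Mission(Enum):
    LANE_FOLLOWING = auto()
    INTERSECTION = auto()
    PARKING = auto()
    FINISHED = auto()

def detect_sign(frame):
    """Image-processing stub: in the real system this would be a
    YOLOv4 traffic-sign detector running on a camera frame."""
    return frame.get("sign")  # e.g. "intersection", "parking", "finish"

def decide(state, sign):
    """Decision-making stub: map the current state and the detected
    sign (if any) to the next mission state."""
    transitions = {
        "intersection": Mission.INTERSECTION,
        "parking": Mission.PARKING,
        "finish": Mission.FINISHED,
    }
    return transitions.get(sign, state)

def run(frames):
    """System-management stub: iterate over incoming frames and pass
    results between the detector and the decision maker."""
    state = Mission.LANE_FOLLOWING
    history = [state]
    for frame in frames:
        state = decide(state, detect_sign(frame))
        history.append(state)
        if state is Mission.FINISHED:
            break
    return history

# Example run with three synthetic frames:
frames = [{"sign": None}, {"sign": "intersection"}, {"sign": "finish"}]
print([s.name for s in run(frames)])
```

In a ROS implementation these three stubs would naturally become separate nodes communicating over topics, which matches the modular split the system description suggests.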