Deep Reinforcement Learning-Based Robot Exploration for Constructing Map of Unknown Environment

Shih Yeh Chen, Qi Fong He, Chin Feng Lai

Research output: Contribution to journal › Article › peer-review



Two problems remain open in traditional environment exploration algorithms. First, as exploration time increases, the robot repeatedly explores areas that have already been covered. Second, in order to map the environment more accurately, the robot tends to cause slight collisions during the exploration process. To address these two problems, a DQN-based exploration model is proposed that enables the robot to quickly find unexplored areas in an unknown environment, and a DQN-based navigation model is designed to resolve the local-minimum problem that arises during exploration. Through a switching mechanism between the exploration model and the navigation model, the robot can complete the exploration task quickly by selecting the mode appropriate to the current exploration situation. In the experimental results, the difference between the proposed unknown-environment exploration method and previous known-environment exploration methods is less than 5% under the same exploration time. Moreover, after 300,000 training rounds, the robot achieves zero collisions and almost zero repeated exploration of the area. Therefore, the proposed method is more practical than previous methods.
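The abstract does not specify how the switching mechanism decides between the two models. A minimal sketch of one plausible trigger, assuming the robot switches to the navigation model when it detects it is stuck revisiting the same few grid cells (the `ModeSwitcher` class, its window size, and the revisit heuristic are illustrative assumptions, not taken from the paper):

```python
from collections import deque


class ModeSwitcher:
    """Hypothetical exploration/navigation switching mechanism.

    Tracks recently visited grid cells; if the robot keeps cycling
    among only a few cells (a sign of a local minimum), it switches
    from the exploration model to the navigation model.
    """

    def __init__(self, window=8, stuck_threshold=3):
        self.recent = deque(maxlen=window)  # recently visited cells
        self.stuck_threshold = stuck_threshold
        self.mode = "exploration"

    def update(self, cell):
        """Record the current grid cell and return the active mode."""
        self.recent.append(cell)
        window_full = len(self.recent) == self.recent.maxlen
        # Local-minimum heuristic: over a full window, the robot has
        # visited only a handful of distinct cells, so exploration is
        # making no progress and the navigation model should take over.
        if window_full and len(set(self.recent)) <= self.stuck_threshold:
            self.mode = "navigation"
        else:
            self.mode = "exploration"
        return self.mode


if __name__ == "__main__":
    sw = ModeSwitcher(window=4, stuck_threshold=2)
    # Oscillating between two cells triggers the navigation model.
    for cell in [(0, 0), (0, 1), (0, 0), (0, 1)]:
        mode = sw.update(cell)
    print(mode)
```

In this sketch the navigation model would then drive the robot out of the trapped region, after which the switcher naturally falls back to exploration once fresh cells appear in the window.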

Original language: English
Journal: Information Systems Frontiers
Publication status: Accepted/In press - 2021

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Software
  • Information Systems
  • Computer Networks and Communications


