Backward Q-learning: The combination of Sarsa algorithm and Q-learning

Yin Hao Wang, Tzuu Hseng S. Li, Chih Jui Lin

Research output: Article › peer-review

46 Citations (Scopus)

Abstract

Reinforcement learning (RL) has been applied to many fields and applications, but there is still a dilemma between exploration and exploitation in the action selection policy. Two of the best-known reinforcement learning algorithms are Q-learning and Sarsa, and they possess different characteristics. Generally speaking, the Sarsa algorithm converges faster, while the Q-learning algorithm achieves better final performance. However, Sarsa is easily stuck in a local minimum, and Q-learning needs a longer time to learn. Most of the literature investigates the action selection policy. Instead of studying an action selection strategy, this paper focuses on how to combine Q-learning with the Sarsa algorithm, and presents a new method, called backward Q-learning, which can be implemented within both the Sarsa algorithm and Q-learning. The backward Q-learning algorithm directly tunes the Q-values, and the Q-values in turn affect the action selection policy. Therefore, the proposed RL algorithms can enhance learning speed and improve final performance. Finally, three experiments, including the cliff walk, mountain car, and cart-pole balancing control systems, are used to verify the feasibility and effectiveness of the proposed scheme. All the simulations illustrate that the backward Q-learning based RL algorithms outperform the well-known Q-learning and Sarsa algorithms.
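The abstract does not give the update rule, so the following Python sketch is one plausible reading of the combination it describes: on-policy Sarsa updates during the episode, followed by a Q-learning pass over the stored transitions in reverse order at episode end. The environment interface (env.reset(), env.step()), the hyperparameter names (alpha, beta, gamma, epsilon), and the reverse-replay mechanism itself are illustrative assumptions, not details confirmed by this record.

    # Hedged sketch: Sarsa with a backward Q-learning pass at episode end.
    # The env interface and all hyperparameter names are assumptions.
    import random
    from collections import defaultdict

    def sarsa_with_backward_q(env, episodes=500, alpha=0.1, beta=0.1,
                              gamma=0.99, epsilon=0.1, n_actions=4):
        Q = defaultdict(float)  # tabular Q-values, keyed by (state, action)

        def epsilon_greedy(s):
            if random.random() < epsilon:
                return random.randrange(n_actions)
            return max(range(n_actions), key=lambda a: Q[(s, a)])

        for _ in range(episodes):
            memory = []                  # transitions collected this episode
            s = env.reset()
            a = epsilon_greedy(s)
            done = False
            while not done:
                s2, r, done = env.step(a)       # assumed (state, reward, done)
                a2 = epsilon_greedy(s2)
                # Ordinary on-policy Sarsa update during the episode.
                target = r + (0 if done else gamma * Q[(s2, a2)])
                Q[(s, a)] += alpha * (target - Q[(s, a)])
                memory.append((s, a, r, s2, done))
                s, a = s2, a2
            # Backward pass: replay the episode in reverse, applying the
            # off-policy Q-learning update to directly tune the Q-values.
            for s, a, r, s2, done in reversed(memory):
                best_next = 0 if done else max(Q[(s2, b)]
                                               for b in range(n_actions))
                Q[(s, a)] += beta * (r + gamma * best_next - Q[(s, a)])
        return Q

Under these assumptions, replaying transitions backward lets a terminal reward propagate through the whole episode in a single pass, which would be consistent with the abstract's claim of faster learning combined with improved final performance.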

Original language: English
Pages (from-to): 2184-2193
Number of pages: 10
Journal: Engineering Applications of Artificial Intelligence
Volume: 26
Issue number: 9
DOIs
Publication status: Published - 2013

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Artificial Intelligence
  • Electrical and Electronic Engineering

