TY - JOUR
T1 - Collaborative framework of accelerating reinforcement learning training with supervised learning based on edge computing
AU - Lin, Yu-Shan
AU - Lai, Chin-Feng
AU - Chuang, Chieh-Lin
AU - Ge, Xiaohu
AU - Chao, Han-Chieh
N1 - Funding Information:
Xiaohu Ge would like to acknowledge the support from the National Key Research and Development Program of China under Grant 2017YFE0121600. Han-Chieh Chao would like to acknowledge the support from Taiwan Ministry of Science and Technology under Grant 107-2221-E-259-005-MY3.
Publisher Copyright:
© 2021 Taiwan Academic Network Management Committee. All rights reserved.
PY - 2021
Y1 - 2021
N2 - Training a reinforcement learning model usually requires a large amount of training data and computing time to extract regularities from environmental feedback before the model converges. However, edge nodes usually lack powerful computing capabilities, which makes it impractical to apply reinforcement learning models directly on edge computing nodes. The framework proposed in this study therefore enables the reinforcement learning model to gradually converge toward the parameters of a supervised learning model within a shorter computing time, addressing the problem of insufficient terminal device performance in edge computing. In the experiments, the operating differences across hardware of varying performance and the influence of the network environment and neural network architecture are analyzed on the MNIST and Mall data sets. The results show that the collaborative training framework can sustain the real-time load required by users, and that applications of different complexity levels impose different degrees of latency pressure on the model.
AB - Training a reinforcement learning model usually requires a large amount of training data and computing time to extract regularities from environmental feedback before the model converges. However, edge nodes usually lack powerful computing capabilities, which makes it impractical to apply reinforcement learning models directly on edge computing nodes. The framework proposed in this study therefore enables the reinforcement learning model to gradually converge toward the parameters of a supervised learning model within a shorter computing time, addressing the problem of insufficient terminal device performance in edge computing. In the experiments, the operating differences across hardware of varying performance and the influence of the network environment and neural network architecture are analyzed on the MNIST and Mall data sets. The results show that the collaborative training framework can sustain the real-time load required by users, and that applications of different complexity levels impose different degrees of latency pressure on the model.
UR - http://www.scopus.com/inward/record.url?scp=85103667871&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85103667871&partnerID=8YFLogxK
U2 - 10.3966/160792642021032202001
DO - 10.3966/160792642021032202001
M3 - Article
AN - SCOPUS:85103667871
SN - 1607-9264
VL - 22
SP - 229
EP - 238
JO - Journal of Internet Technology
JF - Journal of Internet Technology
IS - 2
ER -