Adaptive cache pre-forwarding policy for distributed deep learning

Sheng Tzong Cheng, Chih Wei Hsu, Gwo Jiun Horng, Che Hsuan Lin

Research output: Article

Abstract

With the rapid growth of deep learning algorithms, several high-accuracy models have been developed and applied to many real-world domains. Deep learning is parallelizable and well suited to distributed computing, which can significantly improve system throughput. However, cross-machine training faces a bottleneck: network latency. Nodes frequently need to wait for synchronization, and the content of each synchronization may range from several megabytes to hundreds of megabytes. Thus, network communication consumes considerable time in the training process and reduces system performance. Many computing architectures have been proposed to address this. This paper proposes a distributed computing system for deep learning. Our design aims to reduce synchronization times and network blocking times by using a new cache mechanism, called cache pre-forwarding. The design concept of cache pre-forwarding is to exploit reinforcement learning to train a pre-forwarding policy that increases the cache hit rate. Because of the nature of reinforcement learning, our policy is adaptive and applicable to different computing environments. Finally, we experimentally demonstrate that our system is feasible.
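The abstract does not disclose the exact state, action, or reward design used for the pre-forwarding policy, so the following is only a minimal sketch of the general idea: a reinforcement-learning agent that decides, per parameter block, whether to pre-forward it into a worker's cache before the next synchronization round, rewarded for cache hits and penalized for wasted transfers. All names (`PreForwardAgent`, `simulate`), the tabular Q-learning formulation, and the reward values are hypothetical assumptions, not the authors' method.

```python
import random
from collections import defaultdict

# Hypothetical toy setup: for each parameter block, decide whether to pre-forward
# it into the worker's cache before the next sync round (assumption, not the paper's design).
ACTIONS = (0, 1)  # 0 = do not pre-forward, 1 = pre-forward


class PreForwardAgent:
    """Tabular Q-learning over (block id, was it requested last round?) states."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)  # Q[(state, action)]
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:  # epsilon-greedy exploration
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])


def simulate(rounds=5000, n_blocks=8, seed=0):
    """Toy environment: block requests follow a skewed distribution, so a good
    policy learns to pre-forward only the frequently requested (hot) blocks."""
    random.seed(seed)
    agent = PreForwardAgent()
    weights = [2 ** -i for i in range(n_blocks)]  # hot blocks are requested more often
    last_requested = set()
    hits = requests = 0
    for _ in range(rounds):
        # Decide, per block, whether to place it in the worker cache this round.
        decisions = {}
        for b in range(n_blocks):
            state = (b, b in last_requested)
            decisions[b] = (state, agent.act(state))
        # The worker requests one block for synchronization.
        requested = random.choices(range(n_blocks), weights=weights)[0]
        requests += 1
        new_last = {requested}
        for b, (state, action) in decisions.items():
            if action == 1 and b == requested:
                reward, hits = 1.0, hits + 1   # pre-forwarded block was needed: cache hit
            elif action == 1:
                reward = -0.2                  # pre-forwarded but unused: wasted bandwidth
            elif b == requested:
                reward = -0.5                  # needed but not cached: blocking network fetch
            else:
                reward = 0.0
            agent.learn(state, action, reward, (b, b in new_last))
        last_requested = new_last
    return hits / requests


if __name__ == "__main__":
    print(f"cache hit rate after training: {simulate():.2f}")
```

Running the sketch shows the agent converging toward pre-forwarding the hot blocks, which is the behavior the adaptive policy is meant to capture; the actual system would replace the toy environment with real synchronization traffic.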

Original language: English
Article number: 106558
Journal: Computers and Electrical Engineering
Volume: 82
DOIs
Publication status: Published - Mar 2020


All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Computer Science (all)
  • Electrical and Electronic Engineering
