Modeling Interprocessor Communication and Performance Scalability for Distributed Deep Learning Systems

Yi Hong Lyu, Cheng Yueh Liu, Chen Pang Lee, Chia Heng Tu, Shih Hao Hung

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

As deep learning applications become popular, designing deep learning systems that unleash the computing power of the underlying hardware is a critical task. Aside from the computing hardware, the interconnection network is also a key factor affecting the delivered performance. For large and complex models, the scalability of the system depends heavily on the network design as well as on software behavior. In this paper, we propose a profile-data-guided performance prediction method that estimates the performance of a system with the desired high-speed interconnects, based on profiling data obtained in a previous run. In particular, we leverage the open-source profiling tool SOFA to characterize the software activities of deep learning workloads running on a computer cluster, and use the characterized information to build a performance model of the model-training process. When making predictions, SOFA captures the performance-critical factors required by the model. To evaluate the proposed method, four popular deep learning models, ResNet50, Inception3, AlexNet, and VGG16, are adopted in our experiments, and a four-node computer cluster is used to profile their training on TensorFlow. We ran a scalability analysis to determine the appropriate cluster size and the suitable computer networks for each model. Comparing the predicted data with measurements on the cluster, our model achieves up to 95% accuracy in most cases, with a maximum error rate of 10%.
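To illustrate the general idea of profile-data-guided prediction, the sketch below estimates per-step training time for different interconnect speeds from two profiled quantities (per-step compute time and gradient volume). This is a minimal, assumption-laden illustration, not the SOFA-based model of the paper: the ring all-reduce formula, the `overlap` parameter, and all numeric values are hypothetical.

```python
# Minimal illustrative sketch of profile-data-guided step-time prediction.
# NOTE: formulas, parameter names, and numbers are assumptions for illustration;
# they are NOT the SOFA-based performance model described in the paper.

from dataclasses import dataclass


@dataclass
class ProfiledRun:
    """Quantities measured on a baseline cluster run (e.g., with a profiler)."""
    compute_time_s: float   # per-step forward/backward time on one worker
    gradient_bytes: float   # bytes exchanged per step (gradient/model size)


def ring_allreduce_time(gradient_bytes: float, num_workers: int,
                        bandwidth_gbps: float, latency_s: float = 5e-6) -> float:
    """Approximate ring all-reduce time: 2*(N-1)/N of the data crosses each link."""
    if num_workers <= 1:
        return 0.0
    bytes_on_wire = 2.0 * (num_workers - 1) / num_workers * gradient_bytes
    bandwidth_bytes_per_s = bandwidth_gbps * 1e9 / 8.0
    return 2 * (num_workers - 1) * latency_s + bytes_on_wire / bandwidth_bytes_per_s


def predicted_step_time(profile: ProfiledRun, num_workers: int,
                        bandwidth_gbps: float, overlap: float = 0.0) -> float:
    """Predict per-step time for `num_workers` on a given interconnect.
    `overlap` in [0, 1] models how much communication hides behind computation."""
    comm = ring_allreduce_time(profile.gradient_bytes, num_workers, bandwidth_gbps)
    return profile.compute_time_s + (1.0 - overlap) * comm


if __name__ == "__main__":
    # Hypothetical numbers: a ResNet-50-like model with ~100 MB of gradients.
    profile = ProfiledRun(compute_time_s=0.150, gradient_bytes=100e6)
    for gbps in (10, 25, 100):  # candidate interconnect speeds
        t = predicted_step_time(profile, num_workers=4, bandwidth_gbps=gbps)
        print(f"{gbps:>3} Gb/s: predicted step time {t * 1000:.1f} ms")
```

A sweep like this over cluster size and bandwidth is how one would read off the scalability trade-offs the abstract refers to, with the profiled quantities anchoring the prediction to a real run.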

Original language: English
Title of host publication: 2019 International Conference on High Performance Computing and Simulation, HPCS 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 169-176
Number of pages: 8
ISBN (Electronic): 9781728144849
DOIs
Publication status: Published - 2019 Jul
Event: 2019 International Conference on High Performance Computing and Simulation, HPCS 2019 - Dublin, Ireland
Duration: 2019 Jul 15 - 2019 Jul 19

Publication series

Name: 2019 International Conference on High Performance Computing and Simulation, HPCS 2019

Conference

Conference: 2019 International Conference on High Performance Computing and Simulation, HPCS 2019
Country/Territory: Ireland
City: Dublin
Period: 2019 Jul 15 - 2019 Jul 19

All Science Journal Classification (ASJC) codes

  • Computer Science Applications
  • Hardware and Architecture
  • Modelling and Simulation
  • Computer Networks and Communications
