TY - GEN
T1 - GraphRC: Accelerating Graph Processing on Dual-Addressing Memory with Vertex Merging
T2 - 41st IEEE/ACM International Conference on Computer-Aided Design, ICCAD 2022
AU - Cheng, Wei
AU - Wu, Chun-Feng
AU - Chang, Yuan-Hao
AU - Lin, Ing-Chao
N1 - Funding Information:
This work was supported in part by Academia Sinica under Grants AS-IA-111-M01, AS-GCS-110-08, and AS-CDA-107-M05, and by the National Science and Technology Council under Grants 111-2223-E-001-001, 111-2221-E-001-013-MY3, 110-2221-E-006-084-MY3, and 109-2628-E-006-012-MY3. This work was also supported in part by the Intelligent Manufacturing Research Center from the Featured Areas Research Center Program by the Ministry of Education, Taiwan (ROC).
Publisher Copyright:
© 2022 Association for Computing Machinery.
PY - 2022/10/30
Y1 - 2022/10/30
AB - Architectural innovation in graph accelerators attracts research attention due to the foreseeable inflation in data sizes and the irregular memory access patterns of graph algorithms. Conventional graph accelerators ignore the potential of the Non-Volatile Memory (NVM) crossbar as a dual-addressing memory and treat it as a traditional single-addressing memory with higher density and better energy efficiency. In this work, we present GraphRC, a graph accelerator that leverages the power of dual-addressing memory by mapping in-edge/out-edge requests to column/row-oriented memory accesses. Although the capability of dual-addressing memory greatly improves the performance of graph processing, some memory accesses still suffer from low-utilization issues. Therefore, we propose a vertex merging (VM) method that improves the cache block utilization rate by merging memory requests from consecutive vertices. VM reduces the execution time of all 6 graph algorithms on all 4 datasets by 24.24% on average. We then identify that the data dependency inherent in a graph limits the usage of VM and that its effectiveness is bounded by the percentage of mergeable vertices. To overcome this limitation, we propose an aggressive vertex merging (AVM) method that outperforms VM by ignoring the data dependency inherent in a graph. AVM significantly reduces the execution time of ranking-based algorithms on all 4 datasets while preserving the correct ranking of the top 20 vertices.
UR - http://www.scopus.com/inward/record.url?scp=85145660403&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85145660403&partnerID=8YFLogxK
U2 - 10.1145/3508352.3549408
DO - 10.1145/3508352.3549408
M3 - Conference contribution
AN - SCOPUS:85145660403
T3 - IEEE/ACM International Conference on Computer-Aided Design, Digest of Technical Papers, ICCAD
BT - Proceedings of the 41st IEEE/ACM International Conference on Computer-Aided Design, ICCAD 2022
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 30 October 2022 through 4 November 2022
ER -