A 12TOPS/W Computing-in-Memory Accelerator for Convolutional Neural Networks

Jun Hui Fu, Soon Jyh Chang

Research output: Conference contribution

Abstract

This paper presents a charge-redistribution-based computing-in-memory (CIM) accelerator for convolutional neural networks (CNNs). The CIM macro adopts a 9T static random access memory (SRAM) cell with a read-decoupled port to avoid read disturbance and performs the computation in the analog domain to further reduce the energy consumption per arithmetic operation. A weighted-capacitor switching technique is proposed to achieve better linearity than the conventional current charging/discharging scheme and to reduce the number of analog-to-digital converters (ADCs). Moreover, a low multiply-accumulate (MAC) value skipping technique is proposed to enhance the speed and reduce the power consumption of the CIM macro by skipping the first few bits during the analog-to-digital conversion. The proposed CIM macro was fabricated in a TSMC 40-nm CMOS process. Measurement results show that the proof-of-concept prototype achieves an energy efficiency of 12.02 TOPS/W under 8-bit input and 8-bit weight resolution.
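The low-MAC-value skipping idea described above can be illustrated with a behavioral model: when the accumulated analog value is known to be small, the first few MSB decisions of a successive-approximation conversion must resolve to 0 and can be skipped, saving conversion cycles. The sketch below is a hypothetical software model of that principle, not the paper's circuit; the function name, parameters, and the SAR-style search are assumptions for illustration.

```python
def sar_convert(v, n_bits=8, vref=1.0, skip_msbs=0):
    """Behavioral SAR ADC model (illustrative, not the paper's circuit).

    v         : analog input, 0 <= v < vref
    skip_msbs : number of leading MSB cycles to skip, valid only when the
                MAC value is known to be small enough that those bits are 0.
    Returns (digital code, number of comparison cycles used).
    """
    code = 0
    cycles = 0
    for i in range(n_bits - 1, -1, -1):
        if (n_bits - 1 - i) < skip_msbs:
            continue  # skipped MSB assumed 0 for a low MAC value
        trial = code | (1 << i)               # tentatively set bit i
        if v >= trial / (1 << n_bits) * vref:  # compare against DAC level
            code = trial                       # keep the bit
        cycles += 1
    return code, cycles
```

For a small input, skipping MSB cycles yields the same code with fewer comparisons, which is the source of the speed and power savings claimed for the technique:

```python
full = sar_convert(0.05)                 # 8 comparison cycles
fast = sar_convert(0.05, skip_msbs=2)    # 6 comparison cycles, same code
```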

Original language: English
Title of host publication: IEEE International Symposium on Circuits and Systems, ISCAS 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 586-589
Number of pages: 4
ISBN (electronic): 9781665484855
DOIs
Publication status: Published - 2022
Event: 2022 IEEE International Symposium on Circuits and Systems, ISCAS 2022 - Austin, United States
Duration: 27 May 2022 - 1 June 2022

Publication series

Name: Proceedings - IEEE International Symposium on Circuits and Systems
Volume: 2022-May
ISSN (print): 0271-4310

Conference

Conference: 2022 IEEE International Symposium on Circuits and Systems, ISCAS 2022
Country/Territory: United States
City: Austin
Period: 27/05/22 - 01/06/22

All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering
