Impacts and solutions of nonvolatile-memory-induced weight error in the computing-in-memory neural network system

Yu Hsuan Lin, Dai Ying Lee, Chao Hung Wang, Ming Liang Wei, Ming Hsiu Lee, Hsiang Lan Lung, Kuang Yeu Hsieh, Keh Chung Wang, Chih Yuan Lu

Research output: Contribution to journal › Article › peer-review

Abstract

Nonvolatile-memory-based computing-in-memory architecture is one of the solutions to the massive data movement problem in the conventional von Neumann computing architecture, since multiplication-and-accumulation (MAC) operations can be performed directly inside the memory array. This paper investigates the errors arising from the imperfections of resistive random access memory, including program error, read fluctuation, and retention drift, and their impacts on inference accuracy in convolutional neural networks. The influence of weight errors in each convolution layer is evaluated according to the change of neuron distributions. A batch normalization (BN) parameter calibration method is proposed to correctly scale and shift the MAC results to compensate for weight errors. This calibrated BN process drastically improves inference accuracy not only for the as-programmed analog ReRAM array but also for devices after long-term retention. This approach provides an effective direction for dealing with nonvolatile-memory-induced errors in artificial neural networks.
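The BN calibration idea described in the abstract can be illustrated with a minimal sketch: re-collect BN statistics on MAC results produced by the error-affected weights, so that the scale-and-shift step restores the original neuron distributions. The error model (additive Gaussian conductance noise) and all variable names below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ideal weights and a perturbed copy emulating ReRAM weight error
# (assumption: additive Gaussian noise; the actual device error model
# in the paper may differ).
W_ideal = rng.normal(0.0, 1.0, size=(64, 128))
W_noisy = W_ideal + rng.normal(0.0, 0.3, size=W_ideal.shape)

# Calibration inputs fed through the layer.
X = rng.normal(0.0, 1.0, size=(1000, 64))
mac_ideal = X @ W_ideal
mac_noisy = X @ W_noisy

# Stale BN statistics: collected on ideal MAC results (as at training time).
mu0, var0 = mac_ideal.mean(axis=0), mac_ideal.var(axis=0)
# Calibrated BN statistics: re-collected on the error-affected MAC results.
mu1, var1 = mac_noisy.mean(axis=0), mac_noisy.var(axis=0)

def bn(x, mu, var, eps=1e-5):
    """Batch-normalize with given statistics (gamma=1, beta=0 for brevity)."""
    return (x - mu) / np.sqrt(var + eps)

# Without calibration, noisy MACs are normalized with stale statistics and
# their per-neuron variance is inflated; with calibration, the neuron
# distributions return to zero mean and unit variance.
y_stale = bn(mac_noisy, mu0, var0)
y_calib = bn(mac_noisy, mu1, var1)
```

The same recalibration applies per channel in a convolution layer; only the BN running mean and variance need updating, while the programmed weights stay untouched.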

Original language: English
Article number: SGGB15
Journal: Japanese Journal of Applied Physics
Volume: 59
Issue number: SG
Publication status: Published - 2020 Apr 1

All Science Journal Classification (ASJC) codes

  • General Engineering
  • General Physics and Astronomy
