
Empirical study on security verification and assessment of neural network accelerator

Research output: Article › peer review

Abstract

With the significant success of machine learning, there are now plenty of innovative neural network designs, and the related applications have become pervasive in our daily life, even in life-critical domains such as autopilot and medical diagnosis. In these domains, whether an AI-based system is “secure” is a critical issue. In this work, we first present six Hardware Trojan attacks and demonstrate their impact on neural network hardware designs. When data leakage occurs, we encode the leaked data into the output, making the leakage harder to detect. Most of our attacks can either achieve an attack success rate above 98% or leak confidential data without causing any functional violation, with less than 1.5% overhead. We also discuss how to detect these Hardware Trojans effectively and efficiently with formal verification methods, and we further propose a risk assessment process that provides priority guidance for the security verification tasks of neural network hardware. Based on our results, we strongly suggest that security specification and thorough verification are essential to neural network designs.
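The abstract mentions encoding leaked data into the accelerator's output so that leakage causes no functional violation. As a rough, hypothetical sketch of that idea (invented for illustration, not the paper's actual mechanism), a Trojaned datapath could hide leaked bits in the least-significant bits of quantized output logits, so the top-ranked prediction is normally unchanged while an attacker who knows the scheme can read the secret back out:

```python
# Hypothetical illustration only: hide leaked bits in the LSBs of
# 8-bit quantized output logits. Function names and the encoding
# scheme are invented for this sketch, not taken from the paper.

def embed_leak(outputs, secret_bits):
    """Overwrite the least-significant bit of each output value
    with one bit of the secret (e.g. a weight or input byte)."""
    return [(value & ~1) | bit for value, bit in zip(outputs, secret_bits)]

def extract_leak(leaked_outputs):
    """Recover the hidden bits from the observed outputs."""
    return [value & 1 for value in leaked_outputs]

# Perturbing only the LSB shifts each quantized logit by at most 1,
# so the argmax class is usually preserved (no functional violation).
logits = [200, 13, 57, 142]
secret = [1, 0, 1, 1]
tampered = embed_leak(logits, secret)
assert extract_leak(tampered) == secret
assert tampered.index(max(tampered)) == logits.index(max(logits))
```

This toy channel only illustrates why such leakage is hard to spot from input/output behavior alone, which is the motivation the abstract gives for formal, specification-driven security verification.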

Original language: English
Article number: 104845
Journal: Microprocessors and Microsystems
Volume: 99
DOIs
Publication status: Published - Jun 2023

All Science Journal Classification (ASJC) codes

  • Software
  • Hardware and Architecture
  • Computer Networks and Communications
  • Artificial Intelligence

