TY - GEN
T1 - Class-Aware Pruning for Efficient Neural Networks
AU - Jiang, Mengnan
AU - Wang, Jingcun
AU - Eldebiky, Amro
AU - Yin, Xunzhao
AU - Zhuo, Cheng
AU - Lin, Ing-Chao
AU - Zhang, Grace Li
N1 - Publisher Copyright:
© 2024 EDAA.
PY - 2024
Y1 - 2024
N2 - Deep neural networks (DNNs) have demonstrated remarkable success in various fields. However, the large number of floating-point operations (FLOPs) in DNNs poses challenges for their deployment in resource-constrained applications, e.g., edge devices. To address this problem, pruning has been introduced to reduce the computational cost of executing DNNs. Previous pruning strategies are based on weight values, gradient values, and activation outputs. Unlike these solutions, in this paper we propose a class-aware pruning technique to compress DNNs, which provides a novel perspective on reducing their computational cost. In each iteration, the neural network training is modified to facilitate class-aware pruning. Afterwards, the importance of filters with respect to the number of classes is evaluated. Filters that are important for only a few classes are removed. The neural network is then retrained to compensate for the incurred accuracy loss. The pruning iterations continue until no more filters can be removed, indicating that the remaining filters are important for many classes. This pruning technique outperforms previous pruning solutions in terms of accuracy, pruning ratio, and FLOP reduction. Experimental results confirm that this class-aware pruning technique can significantly reduce the number of weights and FLOPs while maintaining high inference accuracy. Our code is available at https://github.com/HWAI-TUDa/Class-Aware-Pruning
AB - Deep neural networks (DNNs) have demonstrated remarkable success in various fields. However, the large number of floating-point operations (FLOPs) in DNNs poses challenges for their deployment in resource-constrained applications, e.g., edge devices. To address this problem, pruning has been introduced to reduce the computational cost of executing DNNs. Previous pruning strategies are based on weight values, gradient values, and activation outputs. Unlike these solutions, in this paper we propose a class-aware pruning technique to compress DNNs, which provides a novel perspective on reducing their computational cost. In each iteration, the neural network training is modified to facilitate class-aware pruning. Afterwards, the importance of filters with respect to the number of classes is evaluated. Filters that are important for only a few classes are removed. The neural network is then retrained to compensate for the incurred accuracy loss. The pruning iterations continue until no more filters can be removed, indicating that the remaining filters are important for many classes. This pruning technique outperforms previous pruning solutions in terms of accuracy, pruning ratio, and FLOP reduction. Experimental results confirm that this class-aware pruning technique can significantly reduce the number of weights and FLOPs while maintaining high inference accuracy. Our code is available at https://github.com/HWAI-TUDa/Class-Aware-Pruning
UR - http://www.scopus.com/inward/record.url?scp=85196526803&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85196526803&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85196526803
T3 - Proceedings - Design, Automation and Test in Europe, DATE
BT - 2024 Design, Automation and Test in Europe Conference and Exhibition, DATE 2024 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 Design, Automation and Test in Europe Conference and Exhibition, DATE 2024
Y2 - 25 March 2024 through 27 March 2024
ER -