TY - GEN
T1 - Area-Efficient Hardware Design for Approximate Basis Conversion in RNS-Variant CKKS Schemes
AU - Wu, Qi Xian
AU - Huang, Tsu Hsiung
AU - Shieh, Ming Der
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - The residue number system (RNS) is a fundamental optimization widely employed in cryptography. In applications such as homomorphic encryption (HE) schemes, the required modulus size can reach several thousand bits. By using RNS decomposition, modular multiplication under a large modulus is performed in parallel over smaller moduli. During the HE procedure, the modulus size is dynamically scaled up or down through basis conversion in the RNS domain. This paper focuses on basis conversion within the Cheon-Kim-Kim-Song (CKKS) scheme, a popular candidate for privacy-preserving machine learning. Due to its approximate nature, fast basis conversion (FBC) is used to accelerate switching between RNS bases, as small errors can be tolerated. However, the pre-processing stage of the FBC algorithm involves unavoidable modular reductions, resulting in significant area overhead and low utilization. In this work, we present a novel approximate basis conversion (ABC) method that eliminates the need for modular reductions while enabling reuse of the dot-product hardware structure. Experimental results demonstrate that the proposed ABC achieves a 64.4% reduction in area-time product under various usage scenarios compared to FBC in CKKS-based computations.
AB - The residue number system (RNS) is a fundamental optimization widely employed in cryptography. In applications such as homomorphic encryption (HE) schemes, the required modulus size can reach several thousand bits. By using RNS decomposition, modular multiplication under a large modulus is performed in parallel over smaller moduli. During the HE procedure, the modulus size is dynamically scaled up or down through basis conversion in the RNS domain. This paper focuses on basis conversion within the Cheon-Kim-Kim-Song (CKKS) scheme, a popular candidate for privacy-preserving machine learning. Due to its approximate nature, fast basis conversion (FBC) is used to accelerate switching between RNS bases, as small errors can be tolerated. However, the pre-processing stage of the FBC algorithm involves unavoidable modular reductions, resulting in significant area overhead and low utilization. In this work, we present a novel approximate basis conversion (ABC) method that eliminates the need for modular reductions while enabling reuse of the dot-product hardware structure. Experimental results demonstrate that the proposed ABC achieves a 64.4% reduction in area-time product under various usage scenarios compared to FBC in CKKS-based computations.
UR - https://www.scopus.com/pages/publications/105030498855
U2 - 10.1109/ICECS66544.2025.11270539
DO - 10.1109/ICECS66544.2025.11270539
M3 - Conference contribution
AN - SCOPUS:105030498855
T3 - 2025 32nd IEEE International Conference on Electronics, Circuits and Systems, ICECS 2025
BT - 2025 32nd IEEE International Conference on Electronics, Circuits and Systems, ICECS 2025
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 32nd IEEE International Conference on Electronics, Circuits and Systems, ICECS 2025
Y2 - 17 November 2025 through 19 November 2025
ER -