TY - GEN
T1 - DiffuCE
T2 - 2025 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2025
AU - Su, Fang Yi
AU - Chang, Tzu Hung
AU - Chiang, Jung Hsien
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Cone-Beam Computed Tomography (CBCT) has garnered significant attention due to its lower radiation dosage and faster scanning time, and has been widely used in clinical applications for decades. However, its poor image quality remains a challenge for clinical experts. To address this problem, we propose DiffuCE, a diffusion model framework for CBCT enhancement. The main contributions of our work are three-fold: (1) Increased Generalizability: Our training data exclusively comprises pixel space data, eliminating the necessity for additional imaging machine settings. This emphasizes the model's ability to generalize effectively across diverse conditions. (2) Efficient Training: Rather than starting from scratch, our approach fine-tunes from a well-established foundation model. This illustrates the viability of efficient training strategies for medical image restoration tasks, optimizing resource utilization. (3) Competitive Performance: DiffuCE exhibits outstanding performance, excelling in FID and LPIPS, ahead of the second place by 0.01 and 36.99 on the private set. On the public dataset, DiffuCE performs competitively with other SOTAs. Moreover, in expert assessments, DiffuCE achieves the highest overall satisfaction score of 7.06, which is 1.38 ahead of the second place, affirming its performance from a clinical standpoint. Code is available at https://github.com/lzh107u/DiffuCE
AB - Cone-Beam Computed Tomography (CBCT) has garnered significant attention due to its lower radiation dosage and faster scanning time, and has been widely used in clinical applications for decades. However, its poor image quality remains a challenge for clinical experts. To address this problem, we propose DiffuCE, a diffusion model framework for CBCT enhancement. The main contributions of our work are three-fold: (1) Increased Generalizability: Our training data exclusively comprises pixel space data, eliminating the necessity for additional imaging machine settings. This emphasizes the model's ability to generalize effectively across diverse conditions. (2) Efficient Training: Rather than starting from scratch, our approach fine-tunes from a well-established foundation model. This illustrates the viability of efficient training strategies for medical image restoration tasks, optimizing resource utilization. (3) Competitive Performance: DiffuCE exhibits outstanding performance, excelling in FID and LPIPS, ahead of the second place by 0.01 and 36.99 on the private set. On the public dataset, DiffuCE performs competitively with other SOTAs. Moreover, in expert assessments, DiffuCE achieves the highest overall satisfaction score of 7.06, which is 1.38 ahead of the second place, affirming its performance from a clinical standpoint. Code is available at https://github.com/lzh107u/DiffuCE
UR - https://www.scopus.com/pages/publications/105003624421
U2 - 10.1109/WACV61041.2025.00455
DO - 10.1109/WACV61041.2025.00455
M3 - Conference contribution
AN - SCOPUS:105003624421
T3 - Proceedings - 2025 IEEE Winter Conference on Applications of Computer Vision, WACV 2025
SP - 4635
EP - 4644
BT - Proceedings - 2025 IEEE Winter Conference on Applications of Computer Vision, WACV 2025
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 28 February 2025 through 4 March 2025
ER -