This paper presents a charge-redistribution-based computing-in-memory (CIM) accelerator for convolutional neural networks (CNNs). The CIM macro adopts a 9T static random access memory (SRAM) cell with a read-decoupled port to avoid read disturbance and to perform the analog computation, further reducing the energy consumption per arithmetic operation. A weighted-capacitor switching technique is proposed to achieve better linearity than the conventional current charging/discharging scheme and to reduce the number of analog-to-digital converters (ADCs). Moreover, a low multiply-accumulate (MAC) value skipping technique is proposed to enhance the speed and reduce the power consumption of the CIM macro by skipping the first few bit decisions during the analog-to-digital conversion. The proposed CIM macro was fabricated in a TSMC 40-nm CMOS process. Measurement results show that the proof-of-concept prototype achieves an energy efficiency of 12.02 TOPS/W under 8-bit input and 8-bit weight resolution.
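The abstract does not detail how the low-MAC-value skipping interacts with the successive-approximation conversion, so the following is only an illustrative behavioral sketch: a software model of a SAR-style conversion in which the MSB comparison cycles are skipped when the analog MAC voltage is known to be small. All parameter names (`v_ref`, `skip_bits`, `v_skip_th`) and the threshold choice are assumptions for illustration, not the authors' circuit implementation.

```python
import numpy as np  # only used if you extend this to batched MAC vectors

def sar_adc_with_msb_skip(v_mac, v_ref=1.0, n_bits=8, skip_bits=3, v_skip_th=None):
    """Behavioral model of a SAR conversion that skips the first few
    MSB decisions when the analog MAC voltage is small (hypothetical
    parameters; not the fabricated macro's implementation).

    v_mac     : analog MAC voltage to convert, in [0, v_ref)
    v_ref     : assumed full-scale reference voltage
    n_bits    : output resolution of the conversion
    skip_bits : number of MSB decisions skipped for low MAC values
    v_skip_th : threshold below which the MSBs are assumed to be 0
                (defaults to the range covered by the remaining bits)
    """
    if v_skip_th is None:
        v_skip_th = v_ref / (2 ** skip_bits)

    code = 0
    cycles = 0

    # For a sufficiently small MAC value the top bits must be 0,
    # so their comparison cycles (and the associated switching
    # energy) can be skipped entirely.
    start_bit = (n_bits - skip_bits - 1) if v_mac < v_skip_th else (n_bits - 1)

    # Standard successive-approximation loop over the remaining bits.
    for bit in range(start_bit, -1, -1):
        trial = code | (1 << bit)
        v_dac = trial / (2 ** n_bits) * v_ref
        if v_mac >= v_dac:
            code = trial
        cycles += 1

    return code, cycles

# A small MAC value resolves in fewer comparison cycles than a large one.
print(sar_adc_with_msb_skip(0.05))   # -> (12, 5): 5 cycles instead of 8
print(sar_adc_with_msb_skip(0.70))   # -> (179, 8): full conversion
```

Under these assumptions, the cycle count (a rough proxy for conversion latency and comparator/DAC switching energy) drops whenever the MAC result is small, which is frequent in CNN layers with sparse or ReLU-clipped activations.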