Face recognition for images acquired under uncontrolled environments and target positions is a challenging task. Input images are first pre-processed and coarsely aligned by a face detection algorithm; however, residual geometric errors remain after this initial alignment. If these errors are not taken into account, recognition performance becomes unacceptable. Although iterative optimization algorithms can be used to fine-tune the alignment during recognition, they significantly increase the computational load. We propose a two-stage face recognition system: a block-based recognition algorithm first provides sufficient tolerance to geometric errors and produces a candidate subset, and a pixel-based recognition algorithm then evaluates only this subset. Simulation results show that the proposed system reduces the average computational complexity by about 69% while achieving promising recognition performance.
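
The coarse-to-fine cascade described above can be sketched as follows. This is a minimal illustration, not the paper's actual method: the block size, candidate-list length, and Euclidean similarity measures are assumptions chosen for clarity. Stage 1 compares cheap block-averaged features, whose pooling tolerates small residual alignment errors, and keeps only the top-k gallery candidates; stage 2 runs the more expensive pixel-wise comparison on that subset alone.

```python
import numpy as np

def block_features(img, block=8):
    # Coarse descriptor: mean intensity per block. Averaging over
    # blocks gives tolerance to small residual geometric errors.
    h, w = img.shape
    h, w = h - h % block, w - w % block
    v = img[:h, :w].reshape(h // block, block, w // block, block)
    return v.mean(axis=(1, 3)).ravel()

def two_stage_match(probe, gallery, k=3, block=8):
    """Stage 1: rank gallery by block-feature distance, keep top-k.
    Stage 2: evaluate only those candidates with a pixel-wise distance."""
    pf = block_features(probe, block)
    coarse = [np.linalg.norm(pf - block_features(g, block)) for g in gallery]
    candidates = np.argsort(coarse)[:k]          # cheap coarse screening
    fine = {i: np.linalg.norm(probe - gallery[i]) for i in candidates}
    return min(fine, key=fine.get)               # best pixel-wise match

# Toy demo with synthetic 32x32 "faces" (hypothetical data).
rng = np.random.default_rng(0)
gallery = [rng.random((32, 32)) for _ in range(10)]
probe = gallery[4] + rng.normal(0, 0.01, (32, 32))  # noisy copy of identity 4
print(two_stage_match(probe, gallery))  # identifies identity 4
```

Because the pixel-wise stage runs on only k of the N gallery entries, most of the expensive fine comparisons are skipped, which is the source of the complexity reduction reported above.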