Over the past decade, several rate-based simulation approaches have been proposed to predict the software failure process. However, most of them do not take the number of available debuggers into account, which may be unrealistic. In practice, the number of debuggers is limited and controlled; if all debuggers or developers are busy, newly detected faults must wait, possibly for a long time, before being corrected and removed. Moreover, practical experience shows that fault removal time is non-negligible and that the number of removed faults generally lags behind the total number of detected faults. Based on these facts, in this paper we apply queueing theory to describe and explain possible debugging behavior during software development. Two simulation procedures are developed, based on G/G/∞ and G/G/m queueing models. The proposed methods are illustrated with real software failure data, and the experimental results are analyzed and discussed in detail. These results help to clarify the influence of debugger-team size on software fault correction activities and other related reliability assessments.
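To make the queueing view of debugging concrete, the following is a minimal sketch (not the paper's actual simulation procedure) of fault correction as a first-come-first-served G/G/m queue: detection times are the arrivals, correction times are the service times, and m is the number of debuggers. Setting m at least as large as the number of faults reproduces the G/G/∞ case, where no fault ever waits. The function name and all inputs are illustrative assumptions.

```python
import heapq

def simulate_ggm_correction(detect_times, service_times, m):
    """Simulate fault correction as an FCFS G/G/m queue.

    detect_times:  fault-detection (arrival) times, sorted ascending
    service_times: per-fault correction (service) times, same order
    m:             number of debuggers (servers); m >= len(detect_times)
                   behaves like G/G/inf (no waiting)
    Returns correction-completion times, in detection order.
    """
    servers = [0.0] * m          # time at which each debugger becomes free
    heapq.heapify(servers)
    completions = []
    for t, s in zip(detect_times, service_times):
        free_at = heapq.heappop(servers)   # earliest-available debugger
        start = max(t, free_at)            # fault waits if all m are busy
        done = start + s
        heapq.heappush(servers, done)      # that debugger is busy until `done`
        completions.append(done)
    return completions

if __name__ == "__main__":
    # hypothetical data: 5 faults detected at these times, each taking 1.0 to fix
    detections = [0.0, 0.2, 0.4, 0.6, 0.8]
    services = [1.0] * 5
    finite = simulate_ggm_correction(detections, services, m=2)
    infinite = simulate_ggm_correction(detections, services, m=5)
    # with limited debuggers, every correction finishes no earlier than
    # under unlimited debuggers, so removals lag further behind detections
    print(all(a >= b for a, b in zip(finite, infinite)))  # -> True
```

With m = 2 the third fault must wait for a debugger to free up, so its removal lags its detection; this is exactly the detected-versus-removed gap the abstract describes, and it widens as m shrinks.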