Dependency Analysis of Accuracy Estimates in k-Fold Cross Validation

Tzu Tsung Wong, Nai Yu Yang

Research output: Contribution to journal › Article › peer-review

112 Citations (Scopus)


A standard procedure for evaluating the performance of classification algorithms is k-fold cross validation. Since the training sets for any pair of iterations in k-fold cross validation overlap when the number of folds is larger than two, the resulting accuracy estimates are considered to be dependent. In this paper, the overlapping of training sets is shown to be irrelevant in determining whether two fold accuracies are dependent. A statistical method is then proposed to test the appropriateness of assuming independence for the accuracy estimates in k-fold cross validation. This method is applied to 20 data sets, and the experimental results suggest that it is generally appropriate to assume that the fold accuracies are independent. Cross validation with non-overlapping training sets can make fold accuracies dependent. However, this dependence has almost no impact on estimating the sample variance of the fold accuracies, and hence they can generally be assumed to be independent.
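To make the object of study concrete, the following is a minimal sketch of k-fold cross validation producing the per-fold accuracies whose dependence the paper analyzes. The toy one-dimensional data set, the nearest-centroid classifier, and k = 5 are illustrative assumptions, not the paper's experimental setup or its statistical test.

```python
# Minimal k-fold cross validation sketch (illustrative, not the paper's method).
import random
import statistics

def k_fold_accuracies(data, k=5, seed=0):
    """Return the k fold accuracies of a nearest-centroid classifier."""
    rng = random.Random(seed)
    data = data[:]
    rng.shuffle(data)
    folds = [data[i::k] for i in range(k)]  # k disjoint test folds
    accs = []
    for i in range(k):
        test = folds[i]
        # training sets of different iterations overlap in k - 2 folds
        train = [row for j in range(k) if j != i for row in folds[j]]
        # one centroid per class, computed from the training portion
        groups = {}
        for x, y in train:
            groups.setdefault(y, []).append(x)
        centroids = {y: sum(xs) / len(xs) for y, xs in groups.items()}
        correct = sum(
            1 for x, y in test
            if min(centroids, key=lambda c: abs(x - centroids[c])) == y
        )
        accs.append(correct / len(test))
    return accs

# toy one-dimensional two-class data with overlapping class ranges
data = [(x, 0) for x in range(20)] + [(x + 15, 1) for x in range(20)]
accs = k_fold_accuracies(data, k=5)
print(accs, statistics.variance(accs))
```

The sample variance of `accs` is the quantity whose estimation, per the abstract, is essentially unaffected by any dependence among the fold accuracies.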

Original language: English
Article number: 8012491
Pages (from-to): 2417-2427
Number of pages: 11
Journal: IEEE Transactions on Knowledge and Data Engineering
Issue number: 11
Publication status: Published - 2017 Nov 1

All Science Journal Classification (ASJC) codes

  • Information Systems
  • Computer Science Applications
  • Computational Theory and Mathematics

