Parametric methods for comparing the performance of two classification algorithms evaluated by k-fold cross validation on multiple data sets

Research output: Contribution to journal › Article › peer-review

21 Citations (Scopus)

Abstract

A popular procedure for identifying which of two classification algorithms performs better is to test them on multiple data sets and aggregate the accuracies resulting from k-fold cross validation to draw a conclusion. Several nonparametric methods have been proposed for this purpose, but parametric methods are a better choice for determining the superior algorithm when the assumptions for deriving sampling distributions are satisfied. In this paper, we treat every accuracy estimate resulting from the instances in a fold or a data set as a point estimator, rather than a fixed value, and derive the sampling distribution of that estimator for comparing the performance of two classification algorithms. Test statistics are proposed for both the data-set and the fold averaging levels, together with the ways to calculate their degrees of freedom. Twelve data sets are chosen to demonstrate that our parametric methods can effectively compare the performance of two classification algorithms on multiple data sets. Several critical issues in using our parametric methods and the nonparametric ones proposed in a previous study are then discussed.
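As a rough illustration of the workflow the abstract describes (paired per-fold accuracies collected over several data sets, then tested for a zero mean difference), the sketch below uses an ordinary paired t-test on pooled fold differences. This is only a stand-in: the paper's contribution is its own test statistics and degrees-of-freedom calculations for the data-set and fold averaging levels, which are not reproduced here, and the function names (fold_accuracies, compare_on_datasets) and the use of scikit-learn estimators and NumPy-array data sets are illustrative assumptions.

    # Illustrative sketch: fold-level paired comparison of two classifiers
    # across multiple data sets. An ordinary paired t-test is used as a
    # stand-in; the paper derives its own test statistics and
    # degrees-of-freedom calculations, which are NOT reproduced here.
    import numpy as np
    from scipy import stats
    from sklearn.model_selection import StratifiedKFold
    from sklearn.metrics import accuracy_score

    def fold_accuracies(clf_a, clf_b, X, y, k=10, seed=0):
        """Return paired per-fold accuracies of two classifiers on one data set.

        X and y are assumed to be NumPy arrays (hypothetical setup).
        """
        skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=seed)
        acc_a, acc_b = [], []
        for train_idx, test_idx in skf.split(X, y):
            for clf, acc in ((clf_a, acc_a), (clf_b, acc_b)):
                clf.fit(X[train_idx], y[train_idx])
                acc.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))
        return np.array(acc_a), np.array(acc_b)

    def compare_on_datasets(clf_a, clf_b, datasets, k=10):
        """Pool paired fold-accuracy differences over all data sets and run a
        one-sample t-test against a zero mean difference (illustrative only)."""
        diffs = []
        for X, y in datasets:
            acc_a, acc_b = fold_accuracies(clf_a, clf_b, X, y, k=k)
            diffs.extend(acc_a - acc_b)
        t_stat, p_value = stats.ttest_1samp(diffs, popmean=0.0)
        return t_stat, p_value

Note that pooling fold differences this naively ignores the dependence introduced by overlapping training sets across folds, which is one reason corrected test statistics and degrees of freedom are needed in the first place.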

Original language: English
Pages (from-to): 97-107
Number of pages: 11
Journal: Pattern Recognition
Volume: 65
DOIs
Publication status: Published - 1 May 2017

All Science Journal Classification (ASJC) codes

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence
