Parametric methods for comparing the performance of two classification algorithms evaluated by k-fold cross validation on multiple data sets

Research output: Article · peer-reviewed

61 Citations (Scopus)

Abstract

A popular procedure for identifying which of two classification algorithms performs better is to test them on multiple data sets and aggregate the accuracies resulting from k-fold cross validation to draw a conclusion. Several nonparametric methods have been proposed for this purpose, while parametric methods are a better choice for determining the superior algorithm when the assumptions required to derive sampling distributions can be satisfied. In this paper, we consider every accuracy estimate resulting from the instances in a fold or a data set as a point estimator, rather than a fixed value, and derive the sampling distribution of this point estimator for comparing the performance of two classification algorithms. Test statistics for both the data-set and the fold averaging levels are proposed, together with the ways to calculate their degrees of freedom. Twelve data sets are chosen to demonstrate that our parametric methods can effectively compare the performance of two classification algorithms on multiple data sets. Several critical issues in using our parametric methods and the nonparametric ones proposed in a previous study are then discussed.
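For orientation, the sketch below shows the general setting the abstract describes: two classifiers are evaluated by k-fold cross validation on several data sets and their per-data-set mean accuracies are compared with a parametric test. This is only a minimal illustration using an ordinary paired t-test at the data-set averaging level; it is not the paper's proposed statistic, which treats each fold or data-set accuracy as a point estimator and computes adjusted degrees of freedom. The data sets and classifiers chosen here are arbitrary assumptions for demonstration.

```python
# Illustrative sketch only: a naive parametric comparison of two classifiers
# using a paired t-test on per-data-set mean CV accuracies. NOT the paper's
# proposed test statistic; data sets and classifiers are arbitrary choices.
from scipy.stats import ttest_rel
from sklearn.datasets import load_breast_cancer, load_iris, load_wine
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

datasets = {
    "iris": load_iris(return_X_y=True),
    "wine": load_wine(return_X_y=True),
    "breast_cancer": load_breast_cancer(return_X_y=True),
}

k = 10  # number of folds
acc_a, acc_b = [], []
for name, (X, y) in datasets.items():
    # Mean accuracy over k folds for each algorithm on this data set
    acc_a.append(cross_val_score(GaussianNB(), X, y, cv=k).mean())
    acc_b.append(cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=k).mean())

# Paired t-test across data sets (data-set averaging level)
t_stat, p_value = ttest_rel(acc_a, acc_b)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```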

Original language: English
Pages (from–to): 97-107
Number of pages: 11
Journal: Pattern Recognition
Volume: 65
DOIs
Publication status: Published - 1 May 2017

All Science Journal Classification (ASJC) codes

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence
