
Kernel mixture model for probability density estimation in Bayesian classifiers

Research output: Article · peer-review

32 citations (Scopus)

Abstract

Estimating reliable class-conditional probabilities is a prerequisite for implementing Bayesian classifiers, and how to estimate probability density functions (PDFs) is also a fundamental problem for other probabilistic induction algorithms. The finite mixture model (FMM) can represent arbitrarily complex PDFs by using a mixture of multimodal distributions, but it assumes that each mixture component follows a given distribution, which may not hold for real-world data. This paper presents a non-parametric kernel mixture model (KMM) based probability density estimation approach, in which the data sample of a class is assumed to be drawn from several unknown, independent hidden subclasses. Unlike traditional FMM schemes, we simply use the k-means clustering algorithm to partition the data sample into several independent components, and the regional density diversities of the components are combined using Bayes' theorem. On the basis of the proposed kernel mixture model, we present a three-step Bayesian classifier, which includes partitioning, structure learning, and PDF estimation. Experimental results show that KMM improves the quality of the PDFs estimated by the conventional kernel density estimation (KDE) method, and that KMM-based Bayesian classifiers outperform existing Gaussian, GMM, and KDE-based Bayesian classifiers.
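The density-estimation step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses scikit-learn's `KMeans` and `KernelDensity` as stand-ins, and the component count and bandwidth are illustrative assumptions rather than values from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KernelDensity

def kmm_density(X, n_components=3, bandwidth=0.5):
    """Kernel mixture model sketch: partition the class sample with
    k-means, fit one KDE per component, and combine the component
    densities weighted by their priors via Bayes' theorem:
    p(x) = sum_c P(c) * p(x | c)."""
    km = KMeans(n_clusters=n_components, n_init=10, random_state=0).fit(X)
    labels = km.labels_
    components = []
    for c in range(n_components):
        Xc = X[labels == c]
        prior = len(Xc) / len(X)                    # P(subclass c)
        kde = KernelDensity(bandwidth=bandwidth).fit(Xc)
        components.append((prior, kde))

    def pdf(x):
        x = np.atleast_2d(x)
        # score_samples returns log densities, so exponentiate
        return sum(p * np.exp(kde.score_samples(x)) for p, kde in components)

    return pdf
```

In a Bayesian classifier, one such mixture density would be fitted per class to serve as the class-conditional PDF, with class priors estimated from the training-set frequencies.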

Original language: English
Pages (from-to): 675-707
Number of pages: 33
Journal: Data Mining and Knowledge Discovery
Volume: 32
Issue number: 3
DOIs
Publication status: Published - 2018 May 1

All Science Journal Classification (ASJC) codes

  • Information Systems
  • Computer Science Applications
  • Computer Networks and Communications
