Towards positive-breakdown radial basis function networks

Sheng Tun Li, Ernst L. Leiss

Research output: Contribution to journal · Conference article · peer-review


Abstract

Radial basis function networks (RBFNs) have recently attracted interest because of their advantages over multilayer perceptrons: they are universal approximators yet achieve faster convergence, since only one layer of weights must be trained. The least squares method is the most widely used for estimating the synaptic weights and provides optimal results if the underlying error distribution is Gaussian. However, the generalization performance of such networks deteriorates under realistic noise whose distribution is unknown or non-Gaussian; in particular, it degrades severely when outliers are present. In this paper we propose a positive-breakdown learning algorithm for RBFNs by applying the breakdown-point approach from robust regression, so that any assumption about, or estimation of, the error distribution is avoided. The loss of efficiency in the presence of Gaussian noise and the problem of local minima that afflict most robust estimators are also taken into account. The resulting network is shown to be highly robust and stable against a high fraction of outliers as well as small perturbations, demonstrating its superiority in controlling both the bias and the variance of the estimators.
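The abstract does not spell out the paper's learning algorithm, but the idea it names has a standard instantiation: replace ordinary least squares for the output-layer weights with a positive-breakdown regression estimator such as least trimmed squares (LTS), which minimizes the sum of the h smallest squared residuals and so tolerates up to a fixed fraction of outliers. The sketch below is an illustrative assumption, not the authors' method; the function names (rbf_design_matrix, fit_rbfn_lts), the random-restart concentration steps, and the trimming fraction keep_frac are all hypothetical choices.

import numpy as np

def rbf_design_matrix(x, centers, width):
    """Gaussian radial basis activations, one column per center."""
    d2 = (x[:, None] - centers[None, :]) ** 2   # pairwise squared distances
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbfn_lts(x, y, centers, width, keep_frac=0.75,
                 n_restarts=20, n_iter=10, rng=None):
    """Fit output weights by least trimmed squares (a sketch, not the
    paper's algorithm): minimize the sum of the h smallest squared
    residuals, h = keep_frac * n, so up to (1 - keep_frac) of the
    samples may be arbitrary outliers."""
    rng = np.random.default_rng(rng)
    Phi = rbf_design_matrix(x, centers, width)
    n, p = Phi.shape
    h = int(keep_frac * n)
    best_w, best_loss = None, np.inf
    for _ in range(n_restarts):
        # Start from a small random subset, then apply concentration
        # steps: refit on the h samples with the smallest residuals.
        idx = rng.choice(n, size=max(p, 3), replace=False)
        w, *_ = np.linalg.lstsq(Phi[idx], y[idx], rcond=None)
        for _ in range(n_iter):
            r2 = (y - Phi @ w) ** 2
            keep = np.argsort(r2)[:h]
            w, *_ = np.linalg.lstsq(Phi[keep], y[keep], rcond=None)
        loss = np.sort((y - Phi @ w) ** 2)[:h].sum()
        if loss < best_loss:
            best_loss, best_w = loss, w
    return best_w

# Toy usage: a sine curve with 20% gross outliers. Ordinary least
# squares would be pulled toward the contaminated points; the trimmed
# fit largely ignores them.
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x) + 0.05 * rng.normal(size=x.size)
bad = rng.choice(x.size, size=20, replace=False)
y[bad] += 5.0
centers = np.linspace(0, 2 * np.pi, 10)
w = fit_rbfn_lts(x, y, centers, width=0.7, rng=0)

The random-restart structure mirrors how high-breakdown estimators are usually computed in practice, since the trimmed objective is non-convex with many local minima; this is one plausible reading of the abstract's remark about local minima, under the stated assumptions.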

Original language: English
Pages (from-to): 98-105
Number of pages: 8
Journal: Proceedings of the International Conference on Tools with Artificial Intelligence
Publication status: Published - 1995
Event: Proceedings of the 1995 IEEE 7th International Conference on Tools with Artificial Intelligence - Herndon, VA, USA
Duration: 1995 Nov 5 - 1995 Nov 8

All Science Journal Classification (ASJC) codes

  • Software

