A General Mean-Based Iterative Winner-Take-All Neural Network

Jar-Ferr Yang, Chi Ming Chen, Wen Chung Wang, Jau Yien Lee

Research output: Contribution to journal › Article

18 Citations (Scopus)

Abstract

In this paper, a new iterative winner-take-all (WTA) neural network is developed and analyzed. The proposed WTA neural net with one-layer structure is established under the concept of the statistical mean. For three typical distributions of initial activations, the convergence behaviors of the existing and the proposed WTA neural nets are evaluated by theoretical analyses and Monte Carlo simulations. We found that the suggested WTA neural network on average requires fewer than log2 M iterations to complete a WTA process for the three distributed inputs, where M is the number of competitors. Furthermore, the fault tolerances of the iterative WTA nets are analyzed and simulated. From the viewpoints of convergence speed, hardware complexity, and robustness to errors, the proposed WTA is suitable for various applications.
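The abstract's mean-based competition can be sketched as an iterative thresholding loop: each iteration, units whose activation does not exceed the mean of the currently active units are deactivated, which on average halves the field and so terminates in roughly log2 M iterations. The sketch below is an illustrative reconstruction from the abstract alone, not the authors' network dynamics; the function name and tie-breaking rule are assumptions.

```python
def mean_based_wta(activations):
    """Illustrative mean-based winner-take-all loop.

    Repeatedly keeps only the units whose activation exceeds the mean
    of the surviving units, until a single winner remains. Returns the
    winner's index and the number of iterations used.
    """
    active = list(range(len(activations)))
    iterations = 0
    while len(active) > 1:
        mean = sum(activations[i] for i in active) / len(active)
        survivors = [i for i in active if activations[i] > mean]
        if not survivors:
            # All survivors are tied at the mean; pick one maximum
            # (tie-breaking rule assumed, not specified by the abstract).
            survivors = [max(active, key=lambda i: activations[i])]
        active = survivors
        iterations += 1
    return active[0], iterations

winner, iters = mean_based_wta([0.2, 0.9, 0.5, 0.1])
# Here M = 4 and the loop finishes in 2 iterations (= log2 4).
```

For this input the first iteration discards the two units below the mean 0.425, and the second separates 0.9 from 0.5; the sub-log2 M average claimed in the paper is a statistical property over the input distributions it analyzes, not a per-instance bound.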

Original language: English
Pages (from-to): 14-24
Number of pages: 11
Journal: IEEE Transactions on Neural Networks
Volume: 6
Issue number: 1
DOIs: 10.1109/72.363454
Publication status: Published - 1995 Jan 1

Fingerprint

  • Winner-take-all
  • Neural networks
  • Fault tolerance
  • Activation
  • Hardware
  • Convergence speed
  • Monte Carlo simulation
  • Robustness
  • Iteration

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence

Cite this

Yang, Jar-Ferr; Chen, Chi Ming; Wang, Wen Chung; Lee, Jau Yien. / A General Mean-Based Iterative Winner-Take-All Neural Network. In: IEEE Transactions on Neural Networks. 1995; Vol. 6, No. 1. pp. 14-24.
@article{743c5a3e909845d6b5362249529b0d6e,
title = "A General Mean-Based Iterative Winner-Take-All Neural Network",
abstract = "In this paper, a new iterative winner-take-all (WTA) neural network is developed and analyzed. The proposed WTA neural net with one-layer structure is established under the concept of the statistical mean. For three typical distributions of initial activations, the convergence behaviors of the existing and the proposed WTA neural nets are evaluated by theoretical analyses and Monte Carlo simulations. We found that the suggested WTA neural network on average requires fewer than log2 M iterations to complete a WTA process for the three distributed inputs, where M is the number of competitors. Furthermore, the fault tolerances of the iterative WTA nets are analyzed and simulated. From the viewpoints of convergence speed, hardware complexity, and robustness to errors, the proposed WTA is suitable for various applications.",
author = "Jar-Ferr Yang and Chen, {Chi Ming} and Wang, {Wen Chung} and Lee, {Jau Yien}",
year = "1995",
month = "1",
day = "1",
doi = "10.1109/72.363454",
language = "English",
volume = "6",
pages = "14--24",
journal = "IEEE Transactions on Neural Networks",
issn = "1045-9227",
publisher = "IEEE Computational Intelligence Society",
number = "1",

}

A General Mean-Based Iterative Winner-Take-All Neural Network. / Yang, Jar-Ferr; Chen, Chi Ming; Wang, Wen Chung; Lee, Jau Yien.

In: IEEE Transactions on Neural Networks, Vol. 6, No. 1, 01.01.1995, p. 14-24.

Research output: Contribution to journal › Article

TY - JOUR

T1 - A General Mean-Based Iterative Winner-Take-All Neural Network

AU - Yang, Jar-Ferr

AU - Chen, Chi Ming

AU - Wang, Wen Chung

AU - Lee, Jau Yien

PY - 1995/1/1

Y1 - 1995/1/1

N2 - In this paper, a new iterative winner-take-all (WTA) neural network is developed and analyzed. The proposed WTA neural net with one-layer structure is established under the concept of the statistical mean. For three typical distributions of initial activations, the convergence behaviors of the existing and the proposed WTA neural nets are evaluated by theoretical analyses and Monte Carlo simulations. We found that the suggested WTA neural network on average requires fewer than log2 M iterations to complete a WTA process for the three distributed inputs, where M is the number of competitors. Furthermore, the fault tolerances of the iterative WTA nets are analyzed and simulated. From the viewpoints of convergence speed, hardware complexity, and robustness to errors, the proposed WTA is suitable for various applications.

AB - In this paper, a new iterative winner-take-all (WTA) neural network is developed and analyzed. The proposed WTA neural net with one-layer structure is established under the concept of the statistical mean. For three typical distributions of initial activations, the convergence behaviors of the existing and the proposed WTA neural nets are evaluated by theoretical analyses and Monte Carlo simulations. We found that the suggested WTA neural network on average requires fewer than log2 M iterations to complete a WTA process for the three distributed inputs, where M is the number of competitors. Furthermore, the fault tolerances of the iterative WTA nets are analyzed and simulated. From the viewpoints of convergence speed, hardware complexity, and robustness to errors, the proposed WTA is suitable for various applications.

UR - http://www.scopus.com/inward/record.url?scp=0029207694&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0029207694&partnerID=8YFLogxK

U2 - 10.1109/72.363454

DO - 10.1109/72.363454

M3 - Article

VL - 6

SP - 14

EP - 24

JO - IEEE Transactions on Neural Networks

JF - IEEE Transactions on Neural Networks

SN - 1045-9227

IS - 1

ER -