Scalable VLSI implementations for neural networks

D. van den Bout, P. Franzon, J. Paulos, T. Miller, W. Snyder, T. Nagle, W. Liu

Research output: Contribution to journal › Article › peer-review


Abstract

This paper discusses research on scalable VLSI implementations of feed-forward and recurrent neural networks. These two families of networks are useful in a wide variety of important applications (classification tasks for feed-forward nets and optimization problems for recurrent nets), but their differences affect the way they should be built. We find that analog computation with digitally programmable weights works best for feed-forward networks, while stochastic processing takes advantage of the integrative nature of recurrent networks. We have demonstrated early prototypes of these networks that compute at rates of 1-2 billion connections per second. These general-purpose neural building blocks can be coupled with an overall data transmission framework that is electronically reconfigured in a local manner to produce arbitrarily large, fault-tolerant networks.
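The abstract does not reproduce the circuit-level details, but as general background on the stochastic-processing claim: stochastic processing typically encodes a value as the density of 1s in a random bitstream, so a single AND gate performs multiplication, and the integration inherent in recurrent-network dynamics averages out the encoding noise. A minimal Python sketch of that encoding follows; the function names are illustrative, not from the paper.

```python
import random

def to_bitstream(value, length, rng):
    """Encode a value in [0, 1] as a random bitstream whose
    fraction of 1s approximates the value."""
    return [1 if rng.random() < value else 0 for _ in range(length)]

def stochastic_multiply(a_bits, b_bits):
    """Multiply two stochastically encoded values with a bitwise AND:
    the probability of a 1 in the result is p(a) * p(b)."""
    return [a & b for a, b in zip(a_bits, b_bits)]

def decode(bits):
    """Recover the encoded value as the mean 1-density of the stream."""
    return sum(bits) / len(bits)

rng = random.Random(42)
length = 10_000
a, b = 0.6, 0.5
product = decode(stochastic_multiply(to_bitstream(a, length, rng),
                                     to_bitstream(b, length, rng)))
print(f"expected {a * b:.3f}, got {product:.3f}")  # approx. 0.300
```

Longer bitstreams trade throughput for precision, which suits networks that already integrate over many cycles; this is one plausible reading of why the abstract pairs stochastic processing with recurrent nets.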

Original language: English
Pages (from-to): 367-385
Number of pages: 19
Journal: Journal of VLSI Signal Processing
Volume: 1
Issue number: 4
DOIs
Publication status: Published - 1990 Apr

All Science Journal Classification (ASJC) codes

  • Signal Processing
  • Information Systems
  • Electrical and Electronic Engineering
