Abstract
This paper discusses research on scalable VLSI implementations of feed-forward and recurrent neural networks. These two families of networks are useful in a wide variety of important applications (classification tasks for feed-forward nets and optimization problems for recurrent nets), but their differences affect the way they should be built. We find that analog computation with digitally programmable weights works best for feed-forward networks, while stochastic processing takes advantage of the integrative nature of recurrent networks. We have demonstrated early prototypes of these networks that compute at rates of 1-2 billion connections per second. These general-purpose neural building blocks can be coupled with an overall data-transmission framework that is electronically reconfigured in a local manner to produce arbitrarily large, fault-tolerant networks.
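As an illustrative aside (not drawn from the paper itself): in stochastic processing, a value in [0, 1] is encoded as the probability that a bit in a random stream is 1, so a multiply reduces to a bitwise AND of two independent streams, which is cheap to build in VLSI. Below is a minimal Python sketch of this encoding; all function names are hypothetical, and the paper's actual circuits are not shown here.

```python
import random

def to_stream(p, n, rng):
    """Encode a value p in [0, 1] as n random bits, each 1 with probability p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def decode(stream):
    """Estimate the encoded value as the fraction of 1s in the stream."""
    return sum(stream) / len(stream)

rng = random.Random(0)
n = 100_000
a, b = 0.6, 0.5

# ANDing two independent streams multiplies the encoded values, since
# P(a_bit & b_bit = 1) = P(a_bit = 1) * P(b_bit = 1) = a * b.
product = decode([x & y for x, y in zip(to_stream(a, n, rng),
                                        to_stream(b, n, rng))])
print(f"estimated a*b = {product:.3f} (exact: {a * b})")
```

The estimate converges only as the stream length grows, which is why this style of computation suits networks that integrate many noisy contributions over time, such as the recurrent nets discussed in the abstract.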
| Original language | English |
| --- | --- |
| Pages (from-to) | 367-385 |
| Number of pages | 19 |
| Journal | Journal of VLSI Signal Processing |
| Volume | 1 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - 1990 Apr |
All Science Journal Classification (ASJC) codes
- Signal Processing
- Information Systems
- Electrical and Electronic Engineering