Design and implementation of a self-adaptive fuzzy inference engine

Jer-Min Jou, Pei-Yin Chen, Sheng Fu Yang

Research output: Paper

2 Citations (Scopus)

Abstract

This paper presents a pipelined fuzzy inference chip with a self-tunable knowledge base. Up to 49 rules are inferred in parallel on the chip, and the memory for its knowledge (rule) base is only 84 bytes, thanks to a memory-efficient, adjustable fuzzy rule format and dynamic rule-generating circuits. Together with a rule-weight tuner, these mechanisms allow the membership functions to be narrowed, widened, moved, amplified, and/or dampened, making the inference process self-adaptive. A three-stage pipeline in the parallel inference architecture makes the chip very fast: it achieves an inference rate of 467K inferences/sec at a clock rate of 30 MHz.
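The five adjustments listed in the abstract (narrowing, widening, moving, amplifying, dampening) can be pictured as parameter updates on a membership function. A minimal sketch, assuming a triangular membership function with center, width, and height parameters (the paper's actual rule format and tuner circuitry are not specified here):

```python
# Hypothetical illustration: the chip's real rule encoding is not reproduced here.
def triangle(x, center, width, height=1.0):
    """Triangular membership degree of x; zero outside [center-width, center+width]."""
    return max(0.0, height * (1.0 - abs(x - center) / width))

class TunableMF:
    """A membership function with the five adjustments the abstract names."""
    def __init__(self, center, width, height=1.0):
        self.center, self.width, self.height = center, width, height

    def narrow(self, factor):   self.width /= factor    # shrink the support
    def widen(self, factor):    self.width *= factor    # enlarge the support
    def move(self, delta):      self.center += delta    # shift along the input axis
    def amplify(self, factor):  self.height *= factor   # raise the peak (rule weight up)
    def dampen(self, factor):   self.height /= factor   # lower the peak (rule weight down)

    def __call__(self, x):
        return triangle(x, self.center, self.width, self.height)

mf = TunableMF(center=5.0, width=2.0)
print(mf(5.0))                             # 1.0 at the center
mf.widen(2.0); mf.move(1.0); mf.dampen(2.0)
print(mf(6.0))                             # peak now at 6.0 with height 0.5
```

In the chip these updates would be applied by the rule-weight tuner and rule-generating circuits rather than in software; the sketch only shows the geometric effect of each operation.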

Original language: English
Pages: 1633-1640
Number of pages: 8
Publication status: Published - 1 Jan 1995
Event: Proceedings of the 1995 IEEE International Conference on Fuzzy Systems. Part 1 (of 5) - Yokohama, Jpn
Duration: 20 Mar 1995 - 24 Mar 1995

Other

Other: Proceedings of the 1995 IEEE International Conference on Fuzzy Systems. Part 1 (of 5)
City: Yokohama, Jpn
Period: 95-03-20 - 95-03-24

Fingerprint

Inference engines
Fuzzy inference
Data storage equipment
Chip
Fuzzy rules
Membership functions
Clocks
Pipelines
Pipelining
Networks (circuits)
Rule base
Knowledge base
Design

All Science Journal Classification (ASJC) codes

  • Software
  • Theoretical Computer Science
  • Artificial Intelligence
  • Applied Mathematics

Cite this

Jou, J-M., Chen, P-Y., & Yang, S. F. (1995). Design and implementation of a self-adaptive fuzzy inference engine. Paper presented at Proceedings of the 1995 IEEE International Conference on Fuzzy Systems. Part 1 (of 5), Yokohama, Jpn. 1633-1640.