An adaptive recurrent neuro-fuzzy filter for noisy speech enhancement

Sheng Nan Wu, Jeen-Shing Wang

Research output: Contribution to journal › Conference article

2 Citations (Scopus)

Abstract

This paper presents a novel adaptive recurrent neuro-fuzzy filter (ARNFF) for speech enhancement in noisy environments. The speech enhancement scheme consists of two microphones, which receive a primary and a reference input source respectively, and the proposed ARNFF, which attenuates the noise corrupting the original speech signal in the primary channel. The ARNFF is a connectionist network that can be translated effortlessly into a set of dynamic fuzzy rules as well as state-space equations. An effective learning algorithm, consisting of a clustering algorithm for structure learning and a recurrent learning algorithm for parameter learning, is adopted from our previous research for the ARNFF construction. Based on our computer simulations and comparisons with existing filters, the advantages of the proposed ARNFF for noisy speech enhancement include: 1) a more compact filter structure, 2) no a priori knowledge needed of the exact lagged order of the input variables, and 3) better performance in long-delay environments.
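The ARNFF itself is not reproduced in this record. As a rough illustration of the two-microphone adaptive noise-cancellation setup the abstract describes (a primary channel carrying speech plus noise, a reference channel carrying correlated noise, and an adaptive filter whose error output is the enhanced speech), here is a minimal sketch using a classical LMS filter in place of the neuro-fuzzy filter; all names, the filter order, and the step size are illustrative assumptions, not the authors' method:

```python
import numpy as np

def lms_noise_canceller(primary, reference, order=8, mu=0.01):
    """Two-channel adaptive noise cancellation with a plain LMS filter.

    primary:   speech + noise picked up by the primary microphone
    reference: correlated noise picked up by the reference microphone
    Returns the enhanced signal (the adaptive filter's error output).
    """
    w = np.zeros(order)                        # adaptive filter taps
    out = np.zeros_like(primary)
    for n in range(order, len(primary)):
        x = reference[n - order + 1:n + 1][::-1]  # recent reference samples
        y = w @ x                                 # noise estimate for primary
        e = primary[n] - y                        # enhanced speech sample
        w += 2 * mu * e * x                       # LMS weight update
        out[n] = e
    return out

# Synthetic demo: sinusoidal "speech" corrupted by noise that reaches the
# primary microphone through a short FIR channel.
rng = np.random.default_rng(0)
t = np.arange(4000)
speech = np.sin(2 * np.pi * 0.01 * t)
noise = rng.standard_normal(4000)
primary = speech + np.convolve(noise, [0.6, 0.3], mode="same")
enhanced = lms_noise_canceller(primary, noise)
```

After the filter converges, the residual error approaches the clean speech; the paper's contribution is replacing the fixed-order linear filter above with a recurrent neuro-fuzzy structure that needs no a priori choice of lagged order.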

Original language: English
Pages (from-to): 3083-3088
Number of pages: 6
Journal: IEEE International Conference on Neural Networks - Conference Proceedings
Volume: 4
Publication status: Published - 2004 Dec 1
Event: 2004 IEEE International Joint Conference on Neural Networks - Proceedings - Budapest, Hungary
Duration: 2004 Jul 25 - 2004 Jul 29

Fingerprint

  • Fuzzy filters
  • Speech enhancement
  • Learning algorithms
  • Fuzzy rules
  • Microphones
  • Clustering algorithms
  • Computer simulation

All Science Journal Classification (ASJC) codes

  • Software

Cite this

@article{59d8678efdf2403d938d9d8372bcd2f3,
title = "An adaptive recurrent neuro-fuzzy filter for noisy speech enhancement",
abstract = "This paper presents a novel adaptive recurrent neuro-fuzzy filter (ARNFF) for speech enhancement in noisy environments. The speech enhancement scheme consists of two microphones, which receive a primary and a reference input source respectively, and the proposed ARNFF, which attenuates the noise corrupting the original speech signal in the primary channel. The ARNFF is a connectionist network that can be translated effortlessly into a set of dynamic fuzzy rules as well as state-space equations. An effective learning algorithm, consisting of a clustering algorithm for structure learning and a recurrent learning algorithm for parameter learning, is adopted from our previous research for the ARNFF construction. Based on our computer simulations and comparisons with existing filters, the advantages of the proposed ARNFF for noisy speech enhancement include: 1) a more compact filter structure, 2) no a priori knowledge needed of the exact lagged order of the input variables, and 3) better performance in long-delay environments.",
author = "Wu, {Sheng Nan} and Jeen-Shing Wang",
year = "2004",
month = "12",
day = "1",
language = "English",
volume = "4",
pages = "3083--3088",
journal = "IEEE International Conference on Neural Networks - Conference Proceedings",
issn = "1098-7576",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}

