TY - JOUR
T1 - A Wearable Assistive Listening Device with Immersive Function Using Sensors Fusion Method for the 3-D Space Perception
AU - Du, Yi Chun
AU - Yu, Hsiang Chien
AU - Ciou, Wei Siang
AU - Li, Yi Lu
N1 - Publisher Copyright:
© 2001-2012 IEEE.
PY - 2024/1/15
Y1 - 2024/1/15
N2 - According to related research, one reason users are unwilling to wear assistive listening devices (ALDs) is the risk of being unable to determine the position of a sound. The ALD proposed in this study can perform a 360° scan and determine that position; however, it cannot determine the vertical position of the target, resulting in fluctuating volume. Therefore, this study proposes a new sensor fusion method for space perception, implemented in a space perception module on the ALD, that combines computer vision (CV) technology, a dual-layer differential microphone array (d-DMA) algorithm, time difference of arrival (TDOA), and a mixing algorithm. The device is primarily designed for patients with mild-to-moderate hearing loss, and a prototype has been developed. It enhances the target speech (TS) and adjusts the dual-channel volume output through the mixing algorithm to achieve an immersive auditory experience, which helps mitigate the risk posed by the inability to determine the position of a sound. Furthermore, this study addresses the issue of fluctuating volume with the d-DMA. Based on the results, the proposed device achieves an image accuracy rate over 94% at a normal conversation distance (<160 cm), with a 30° sound reception range. In addition, the stability of the volume output is improved by 60% compared with a commercial ALD. Clinical results demonstrate that the device improves the speech recognition threshold (SRT) by 5.5 dB in quiet environments and 5.8 dB in noisy environments. Finally, participants' satisfaction with the device in both environments indicates its potential for future commercialization.
AB - According to related research, one reason users are unwilling to wear assistive listening devices (ALDs) is the risk of being unable to determine the position of a sound. The ALD proposed in this study can perform a 360° scan and determine that position; however, it cannot determine the vertical position of the target, resulting in fluctuating volume. Therefore, this study proposes a new sensor fusion method for space perception, implemented in a space perception module on the ALD, that combines computer vision (CV) technology, a dual-layer differential microphone array (d-DMA) algorithm, time difference of arrival (TDOA), and a mixing algorithm. The device is primarily designed for patients with mild-to-moderate hearing loss, and a prototype has been developed. It enhances the target speech (TS) and adjusts the dual-channel volume output through the mixing algorithm to achieve an immersive auditory experience, which helps mitigate the risk posed by the inability to determine the position of a sound. Furthermore, this study addresses the issue of fluctuating volume with the d-DMA. Based on the results, the proposed device achieves an image accuracy rate over 94% at a normal conversation distance (<160 cm), with a 30° sound reception range. In addition, the stability of the volume output is improved by 60% compared with a commercial ALD. Clinical results demonstrate that the device improves the speech recognition threshold (SRT) by 5.5 dB in quiet environments and 5.8 dB in noisy environments. Finally, participants' satisfaction with the device in both environments indicates its potential for future commercialization.
UR - http://www.scopus.com/inward/record.url?scp=85179788779&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85179788779&partnerID=8YFLogxK
U2 - 10.1109/JSEN.2023.3337663
DO - 10.1109/JSEN.2023.3337663
M3 - Article
AN - SCOPUS:85179788779
SN - 1530-437X
VL - 24
SP - 2108
EP - 2117
JO - IEEE Sensors Journal
JF - IEEE Sensors Journal
IS - 2
ER -