Combined hand gesture - Speech model for human action recognition

Sheng Tzong Cheng, Chih Wei Hsu, Jian Pan Li

Research output: Contribution to journal › Article › peer-review

11 Citations (Scopus)

Abstract

This study proposes a dynamic hand gesture detection method that effectively locates dynamic hand gesture regions, together with a hand gesture recognition method that improves the dynamic gesture recognition rate. In addition, speech recognition is integrated into a multimodal model that exploits the correspondence between the state sequences of the hand gesture and speech models, further improving the accuracy of human action recognition. Experimental results show that the proposed method improves human action recognition accuracy and demonstrates the feasibility of the system in practical applications, with the multimodal gesture-speech model achieving higher accuracy than either single-modal version alone.
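The abstract describes combining evidence from a gesture model and a speech model into one multimodal decision. The paper itself couples the state sequences of the two models; as a much simpler hedged sketch of the general idea, the snippet below illustrates score-level (late) fusion, where per-class log-likelihoods from each modality are combined with a weight. All names and values here are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of score-level fusion between a gesture model and a
# speech model. The paper couples state sequences of both models; this
# simplified example instead combines per-class log-likelihoods with a
# weight alpha, a common multimodal fusion strategy.

def fuse_scores(gesture_loglik, speech_loglik, alpha=0.5):
    """Combine per-class log-likelihoods from two modalities.

    gesture_loglik, speech_loglik: dict mapping action label -> log-likelihood
    alpha: weight given to the gesture modality (1 - alpha to speech)
    """
    labels = gesture_loglik.keys() & speech_loglik.keys()
    return {c: alpha * gesture_loglik[c] + (1 - alpha) * speech_loglik[c]
            for c in labels}

def classify(gesture_loglik, speech_loglik, alpha=0.5):
    """Return the action label with the highest fused score."""
    fused = fuse_scores(gesture_loglik, speech_loglik, alpha)
    return max(fused, key=fused.get)

# Example: gesture evidence is ambiguous, speech disambiguates.
gesture = {"wave": -2.0, "point": -2.1}
speech = {"wave": -5.0, "point": -1.0}
print(classify(gesture, speech))  # fused scores favor "point"
```

This kind of fusion lets a confident modality override an ambiguous one, which is one intuition behind the reported accuracy gain of the multimodal model over the single-modal versions.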

Original language: English
Pages (from-to): 17098-17129
Number of pages: 32
Journal: Sensors (Switzerland)
Volume: 13
Issue number: 12
DOIs
Publication status: Published - 2013 Dec 12

All Science Journal Classification (ASJC) codes

  • Analytical Chemistry
  • Information Systems
  • Instrumentation
  • Atomic and Molecular Physics, and Optics
  • Electrical and Electronic Engineering
  • Biochemistry

