Actions speak louder than words: Searching human action video based on body movement

Yan-Ching Lin, Min-Chun Hu, Wen-Huang Cheng, Yung-Huan Hsieh, Hong-Ming Chen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

Human action video search is a frequent demand in multimedia applications, and conventional keyword-based video search schemes often fail to find relevant videos because of noisy video tags. Observing the widespread use of Kinect-like depth cameras, we propose to search human action videos by directly performing the target action with body movements. Human actions are captured by Kinect, and the recorded depth information is used to measure the similarity between the query action and each human action video in the database. We use representative depth descriptors without learned optimization to achieve real-time, promising performance comparable to that of leading methods based on color images and videos. In addition, a large Depth-included Human Action video dataset, namely DHA, is collected to demonstrate the effectiveness of the proposed video search system.
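The abstract describes retrieval by comparing a depth descriptor of the query action against descriptors of database videos. The following is a minimal, hypothetical sketch of that idea only; the descriptor (a histogram of frame-to-frame depth changes) and the L2 ranking distance are illustrative assumptions, not the authors' actual method or dataset format.

```python
# Hypothetical sketch: rank database videos by distance between simple
# depth-based descriptors. The descriptor choice and distance are
# assumptions for illustration, not the paper's implementation.
import numpy as np

def depth_descriptor(depth_frames, bins=32):
    """Summarize a depth clip (T, H, W) as a normalized histogram of
    frame-to-frame depth differences, so clip length does not dominate."""
    diffs = np.abs(np.diff(depth_frames.astype(np.float32), axis=0))
    hist, _ = np.histogram(diffs, bins=bins, range=(0.0, diffs.max() + 1e-6))
    return hist / (hist.sum() + 1e-6)

def rank_videos(query_frames, database):
    """Return database video names sorted by ascending descriptor distance."""
    q = depth_descriptor(query_frames)
    scores = {name: np.linalg.norm(q - depth_descriptor(frames))
              for name, frames in database.items()}
    return sorted(scores, key=scores.get)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-ins for Kinect depth clips: 30 frames of 48x64 depth maps.
    database = {f"video_{i}": rng.random((30, 48, 64)) * 4000 for i in range(5)}
    query = rng.random((30, 48, 64)) * 4000
    print(rank_videos(query, database))
```

Because no descriptor learning is involved, the ranking reduces to a single pass of descriptor extraction and nearest-neighbor comparison, which is consistent with the real-time behavior claimed in the abstract.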

Original language: English
Title of host publication: MM 2012 - Proceedings of the 20th ACM International Conference on Multimedia
Pages: 1261-1262
Number of pages: 2
DOIs
Publication status: Published - 2012 Dec 26
Event: 20th ACM International Conference on Multimedia, MM 2012 - Nara, Japan
Duration: 2012 Oct 29 to 2012 Nov 2

Publication series

Name: MM 2012 - Proceedings of the 20th ACM International Conference on Multimedia

Other

Other: 20th ACM International Conference on Multimedia, MM 2012
Country: Japan
City: Nara
Period: 12-10-29 to 12-11-02

All Science Journal Classification (ASJC) codes

  • Computer Graphics and Computer-Aided Design
  • Human-Computer Interaction
  • Software

