A near-duplicate video retrieval method based on Zernike moments

Tang You Chang, Shen Chuan Tai, Guo Shiang Lin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

4 Citations (Scopus)

Abstract

In this paper, a near-duplicate video retrieval method based on invariant features is proposed. After shot change detection, Zernike moments are extracted from each key-frame of a video as invariant features. Key-frame similarity is obtained by computing the difference between the Zernike moments of key-frames from the query and test videos. To achieve near-duplicate video retrieval, each key-frame is treated as an individual sensor, so that evaluating all key-frames amounts to a multi-sensor decision; the per-key-frame results are then fused to improve retrieval performance. Experimental results show that the proposed method not only finds relevant videos effectively but also withstands common modifications such as re-scaling and logo insertion.
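The pipeline the abstract describes (per-key-frame Zernike moments as rotation-invariant descriptors, then a moment-difference similarity) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes grayscale key-frames mapped onto the unit disk, uses moment magnitudes |Z_nm| as the invariant features, and the function names and the inverse-distance similarity are illustrative choices.

```python
import math
import numpy as np

def radial_poly(rho, n, m):
    """Zernike radial polynomial R_nm(rho) via its factorial series."""
    m = abs(m)
    out = np.zeros_like(rho)
    for k in range((n - m) // 2 + 1):
        c = ((-1) ** k * math.factorial(n - k)
             / (math.factorial(k)
                * math.factorial((n + m) // 2 - k)
                * math.factorial((n - m) // 2 - k)))
        out += c * rho ** (n - 2 * k)
    return out

def zernike_magnitudes(img, max_order=8):
    """Rotation-invariant descriptors |Z_nm| of one grayscale key-frame."""
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    # Map the pixel grid onto the unit disk centred on the frame.
    xs = (2 * x - (w - 1)) / (w - 1)
    ys = (2 * y - (h - 1)) / (h - 1)
    rho = np.hypot(xs, ys)
    theta = np.arctan2(ys, xs)
    mask = rho <= 1.0  # Zernike polynomials are defined on the unit disk
    feats = []
    for n in range(max_order + 1):
        for m in range(0, n + 1):
            if (n - m) % 2:          # R_nm vanishes unless n - m is even
                continue
            kernel = radial_poly(rho, n, m) * np.exp(-1j * m * theta)
            z = (n + 1) / math.pi * np.sum(img[mask] * kernel[mask])
            feats.append(abs(z))     # magnitude is rotation invariant
    return np.array(feats)

def keyframe_similarity(f1, f2):
    """One possible similarity score: inverse of the L2 moment difference."""
    return 1.0 / (1.0 + np.linalg.norm(f1 - f2))
```

A query video would then be compared to a test video by scoring each key-frame pair this way and fusing the per-key-frame scores (e.g. by averaging), in the multi-sensor spirit the abstract describes.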

Original language: English
Title of host publication: 2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2015
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 860-864
Number of pages: 5
ISBN (Electronic): 9789881476807
DOIs
Publication status: Published - 2016 Feb 19
Event: 2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2015 - Hong Kong, Hong Kong
Duration: 2015 Dec 16 - 2015 Dec 19

Publication series

Name: 2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2015

Other

Other: 2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, APSIPA ASC 2015
Country: Hong Kong
City: Hong Kong
Period: 2015-12-16 - 2015-12-19

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Modelling and Simulation
  • Signal Processing

