Unsupervised alignment of news video and text using visual patterns and textual concepts

Jun Bin Yeh, Chung-Hsien Wu, Sheng Xiong Chang

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)


A brief preview of a news video can be generated by semantically aligning the sentences of the anchor's report with the corresponding video shots. Because accurately detecting objects in a visual shot is difficult and a textual term generally corresponds to several synonyms, aligning an anchor sentence with a video shot remains challenging. In this study, the temporal relation among the frames in a visual shot is characterized by a visual language model, and this language model-based temporal relation is then applied to sentence-based alignment. The bag-of-words representations of the main objects in the key frames of a visual shot are first mapped to visual patterns trained from a news video database. The textual terms in each report sentence are then mapped to textual concepts obtained from the HowNet knowledge base. Finally, unsupervised alignment between the textual concepts and the visual patterns in the news videos is performed using IBM Model-1. In the evaluation, the visual pattern language model yields an alignment score of 0.77, exceeding the 0.66 obtained with the DTW method. Across different news categories, visual pattern discovery and textual concept discovery improve the alignment performance in most categories.
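The final step of the pipeline described above relies on IBM Model-1, a standard statistical word-alignment model estimated with expectation-maximization. The sketch below is purely illustrative and is not the authors' implementation: the concept/pattern token names and the toy sentence–shot pairs are hypothetical, standing in for the textual concepts and discovered visual patterns of the paper.

```python
from collections import defaultdict

def ibm_model1(pairs, iterations=10):
    """Estimate translation probabilities t(v | c) between source tokens
    (here: textual concepts) and target tokens (here: visual patterns)
    with IBM Model-1 expectation-maximization."""
    src_vocab = {c for cs, _ in pairs for c in cs}
    tgt_vocab = {v for _, vs in pairs for v in vs}
    # Uniform initialization over the target vocabulary.
    t = {(v, c): 1.0 / len(tgt_vocab) for v in tgt_vocab for c in src_vocab}
    for _ in range(iterations):
        count = defaultdict(float)  # expected co-occurrence counts
        total = defaultdict(float)  # per-source-token normalizers
        for cs, vs in pairs:
            for v in vs:
                z = sum(t[(v, c)] for c in cs)
                for c in cs:
                    delta = t[(v, c)] / z  # posterior that c generated v
                    count[(v, c)] += delta
                    total[c] += delta
        # M-step: renormalize the expected counts.
        t = {(v, c): count[(v, c)] / total[c] for (v, c) in t}
    return t

# Hypothetical toy data: each pair is (textual concepts in a sentence,
# visual patterns in the shot aligned with that sentence).
pairs = [
    (["car"], ["vehicle_patch"]),
    (["car", "road"], ["vehicle_patch", "asphalt_patch"]),
]
t = ibm_model1(pairs)
```

After a few EM iterations on such co-occurrence data, `t` concentrates probability on the pattern each concept consistently co-occurs with (e.g. "car" on the vehicle pattern), which is the property the unsupervised alignment exploits.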

Original language: English
Article number: 5657260
Pages (from-to): 206-215
Number of pages: 10
Journal: IEEE Transactions on Multimedia
Issue number: 2
Publication status: Published - 2011 Apr 1

All Science Journal Classification (ASJC) codes

  • Signal Processing
  • Media Technology
  • Computer Science Applications
  • Electrical and Electronic Engineering
