Automatic video region-of-interest determination based on user attention model

Wen Huang Cheng, Wei Ta Chu, Jin Hau Kuo, Ja Ling Wu

Research output: Contribution to journal › Conference article

27 Citations (Scopus)

Abstract

This paper presents a framework for automatic video region-of-interest determination based on a user attention model. In this work, we make a set of attempts to exploit video attention features together with knowledge of applied media aesthetics. Three types of visual attention features are used: intensity, color, and motion. Following aesthetic principles, these features are combined according to the camera motion type, on the basis of a newly proposed video analysis unit, the frame-segment. Subjective experiments on several kinds of video data demonstrate the effectiveness of the proposed framework.
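To make the abstract's fusion idea concrete, the sketch below combines intensity, color, and motion saliency maps with weights that depend on the camera motion type, then picks the highest-attention window as the region of interest. This is a minimal illustration under assumed details: the weight values, the fixed ROI box size, the sliding-window search, and the feature maps themselves (random arrays here) are all hypothetical and do not reproduce the paper's actual saliency computation or its frame-segment unit.

```python
# Illustrative sketch only: camera-motion-dependent fusion of intensity,
# color, and motion saliency maps, loosely inspired by the abstract.
# Weights, feature maps, and the ROI selection rule are assumptions.
import numpy as np

# Hypothetical per-camera-motion fusion weights (values are made up).
FUSION_WEIGHTS = {
    "static": {"intensity": 0.4, "color": 0.4, "motion": 0.2},
    "pan":    {"intensity": 0.2, "color": 0.2, "motion": 0.6},
    "zoom":   {"intensity": 0.3, "color": 0.3, "motion": 0.4},
}

def normalize(m: np.ndarray) -> np.ndarray:
    """Scale a map to [0, 1] so features are comparable before fusion."""
    m = m.astype(np.float64)
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def fuse_saliency(maps: dict, camera_motion: str) -> np.ndarray:
    """Weighted sum of per-feature saliency maps into one attention map."""
    w = FUSION_WEIGHTS[camera_motion]
    return sum(w[name] * normalize(maps[name]) for name in w)

def pick_roi(attention: np.ndarray, box: tuple = (64, 64)) -> tuple:
    """Return (top, left, height, width) of the fixed-size box with the
    highest summed attention, via an exhaustive sliding-window search."""
    h, w = box
    H, W = attention.shape
    # Zero-padded summed-area table: integral[i, j] = sum of attention[:i, :j].
    integral = np.zeros((H + 1, W + 1))
    integral[1:, 1:] = attention.cumsum(0).cumsum(1)
    best, best_pos = -1.0, (0, 0)
    for top in range(H - h + 1):
        for left in range(W - w + 1):
            total = (integral[top + h, left + w] - integral[top, left + w]
                     - integral[top + h, left] + integral[top, left])
            if total > best:
                best, best_pos = total, (top, left)
    return (*best_pos, h, w)

# Example on random maps standing in for real intensity/color/motion features.
rng = np.random.default_rng(0)
maps = {k: rng.random((120, 160)) for k in ("intensity", "color", "motion")}
roi = pick_roi(fuse_saliency(maps, "pan"))
print("ROI (top, left, height, width):", roi)
```

The weight table is the only place camera motion enters this sketch; a fuller treatment would estimate the motion type per frame-segment and adapt the ROI size accordingly.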

Original language: English
Article number: 1465313
Pages (from-to): 3219-3222
Number of pages: 4
Journal: Proceedings - IEEE International Symposium on Circuits and Systems
DOIs: 10.1109/ISCAS.2005.1465313
Publication status: Published - 2005 Dec 1
Event: IEEE International Symposium on Circuits and Systems 2005, ISCAS 2005 - Kobe, Japan
Duration: 2005 May 23 - 2005 May 26


All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering

Cite this

@article{cb9d15991d10430294e207675de1c56d,
title = "Automatic video region-of-interest determination based on user attention model",
author = "Cheng, {Wen Huang} and Chu, {Wei Ta} and Kuo, {Jin Hau} and Wu, {Ja Ling}",
year = "2005",
month = "12",
day = "1",
doi = "10.1109/ISCAS.2005.1465313",
language = "English",
pages = "3219--3222",
journal = "Proceedings - IEEE International Symposium on Circuits and Systems",
issn = "0271-4310",
publisher = "Institute of Electrical and Electronics Engineers Inc.",

}
