Abstract
Temporal coherence is crucial in content-aware video retargeting. To date, this problem has been addressed by constraining temporally adjacent pixels to be transformed coherently. However, due to the motion-oblivious nature of this simple constraint, the retargeted videos often exhibit flickering or waving artifacts, especially when significant camera or object motions are involved. Since the feature correspondence across frames varies spatially with both camera and object motion, motion-aware treatment of features is required for video resizing. This motivated us to align consecutive frames by estimating interframe camera motion and to constrain relative positions in the aligned frames. To preserve object motion, we detect distinct moving areas of objects across multiple frames and constrain each of them to be resized consistently. We build a complete video resizing framework by incorporating our motion-aware constraints with an adaptation of the scale-and-stretch optimization recently proposed by Wang and colleagues. Our streaming implementation of the framework allows efficient resizing of long video sequences with low memory cost. Experiments demonstrate that our method produces spatiotemporally coherent retargeting results even for challenging examples with complex camera and object motion, which are difficult to handle with previous techniques.
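To make the interframe alignment idea in the abstract concrete, here is a minimal sketch (not the authors' implementation) of estimating camera motion between consecutive frames with OpenCV feature tracking and RANSAC homography fitting, and warping the previous frame into the current frame's coordinates. The file name, parameter values, and helper name are illustrative assumptions.

```python
# Hedged sketch: estimate interframe camera motion and align consecutive frames.
# This is an illustration of the alignment step described in the abstract, not
# the paper's actual pipeline; all parameters and the input file are assumptions.
import cv2
import numpy as np

def estimate_interframe_motion(prev_gray, curr_gray):
    """Fit a global (camera) motion model between two grayscale frames."""
    # Track sparse corner features from the previous frame into the current one.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=8)
    if pts_prev is None:
        return None
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    good = status.ravel() == 1
    if good.sum() < 4:
        return None
    # RANSAC rejects correspondences on independently moving objects,
    # so the fitted homography approximates the camera motion only.
    H, _ = cv2.findHomography(pts_prev[good], pts_curr[good], cv2.RANSAC, 3.0)
    return H

cap = cv2.VideoCapture("input.mp4")  # hypothetical input clip
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, curr = cap.read()
    if not ok:
        break
    curr_gray = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)
    H = estimate_interframe_motion(prev_gray, curr_gray)
    if H is not None:
        # Warp the previous frame into the current frame's coordinate system;
        # temporal coherence constraints would be applied on aligned positions.
        aligned_prev = cv2.warpPerspective(prev, H,
                                           (curr.shape[1], curr.shape[0]))
    prev_gray, prev = curr_gray, curr
cap.release()
```

In the full method, such aligned positions would feed the relative-position constraints of the scale-and-stretch optimization; the sketch above stops at the alignment step.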
Original language | English
---|---
Title of host publication | Proceedings of ACM SIGGRAPH Asia 2009, SIGGRAPH Asia '09
Volume | 28
Edition | 5
DOIs | 10.1145/1661412.1618473
Publication status | Published - 2009 Dec 1
Event | ACM SIGGRAPH Asia 2009, SIGGRAPH Asia '09 - Yokohama, Japan (Duration: 2009 Dec 16 → 2009 Dec 19)
Other
Other | ACM SIGGRAPH Asia 2009, SIGGRAPH Asia '09
---|---
Country | Japan
City | Yokohama
Period | 2009-12-16 → 2009-12-19
Fingerprint
All Science Journal Classification (ASJC) codes
- Computer Graphics and Computer-Aided Design
Cite this
Motion-aware temporal coherence for video resizing. / Wang, Yu Shuen; Fu, Hongbo; Sorkine, Olga; Lee, Tong-Yee; Seidel, Hans Peter.
Proceedings of ACM SIGGRAPH Asia 2009, SIGGRAPH Asia '09. Vol. 28, 5. ed. 2009. Research output: Chapter in Book/Report/Conference proceeding › Conference contribution
TY - GEN
T1 - Motion-aware temporal coherence for video resizing
AU - Wang, Yu Shuen
AU - Fu, Hongbo
AU - Sorkine, Olga
AU - Lee, Tong-Yee
AU - Seidel, Hans Peter
PY - 2009/12/1
Y1 - 2009/12/1
N2 - Temporal coherence is crucial in content-aware video retargeting. To date, this problem has been addressed by constraining temporally adjacent pixels to be transformed coherently. However, due to the motion-oblivious nature of this simple constraint, the retargeted videos often exhibit flickering or waving artifacts, especially when significant camera or object motions are involved. Since the feature correspondence across frames varies spatially with both camera and object motion, motion-aware treatment of features is required for video resizing. This motivated us to align consecutive frames by estimating interframe camera motion and to constrain relative positions in the aligned frames. To preserve object motion, we detect distinct moving areas of objects across multiple frames and constrain each of them to be resized consistently. We build a complete video resizing framework by incorporating our motion-aware constraints with an adaptation of the scale-and-stretch optimization recently proposed by Wang and colleagues. Our streaming implementation of the framework allows efficient resizing of long video sequences with low memory cost. Experiments demonstrate that our method produces spatiotemporally coherent retargeting results even for challenging examples with complex camera and object motion, which are difficult to handle with previous techniques.
UR - http://www.scopus.com/inward/record.url?scp=77749264947&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=77749264947&partnerID=8YFLogxK
U2 - 10.1145/1661412.1618473
DO - 10.1145/1661412.1618473
M3 - Conference contribution
AN - SCOPUS:77749264947
SN - 9781605588582
VL - 28
BT - Proceedings of ACM SIGGRAPH Asia 2009, SIGGRAPH Asia '09
ER -