This work proposes a framework for animation video re-sequencing using deep learning and optimal graph traversal techniques. The proposed system produces new animation sequences by reordering a collection of animation images or an existing animation video. To maintain temporal coherence in the generated animation sequences, a perceptual distance is utilized so that adjacent frames in the re-sequenced animations are as perceptually similar as possible. To measure perceptual distance, we extract image features using activations of deep convolutional neural networks and learn a perceptual distance by training these activation features on a small network with data comprised of human perceptual judgments. With this perceptual metric and graph-based manifold learning techniques, the framework can produce smooth and visually appealing animation results for a variety of animation styles. In contrast to previous work on animation re-sequencing, the proposed framework applies to a broader range of image styles and does not require hand-crafted feature extraction, background subtraction, or feature correspondence. The framework has additional applications to sequencing unstructured collections of images.
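The core idea described above, ordering frames so that consecutive frames are perceptually close, can be illustrated with a minimal sketch. This is not the thesis's actual method (which uses a learned perceptual metric and optimal graph traversal over a manifold); here, plain Euclidean distance between per-frame feature vectors stands in for the learned perceptual distance, and a greedy nearest-neighbor walk stands in for the graph traversal. The `resequence` function and the toy features are illustrative assumptions, not from the source.

```python
import numpy as np

def resequence(features, start=0):
    """Greedy re-sequencing sketch: repeatedly step to the perceptually
    nearest unvisited frame.

    `features` is an (n_frames, d) array of per-frame feature vectors
    (e.g., CNN activations); Euclidean distance is a stand-in for the
    learned perceptual metric used in the thesis."""
    n = len(features)
    # Pairwise distance matrix (n x n) via broadcasting.
    diff = features[:, None, :] - features[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    order = [start]
    visited = {start}
    while len(order) < n:
        # Mask already-visited frames, then take the nearest remaining one.
        d = dist[order[-1]].copy()
        d[list(visited)] = np.inf
        nxt = int(d.argmin())
        order.append(nxt)
        visited.add(nxt)
    return order

# Toy example: shuffled 1-D "features"; the greedy walk recovers a
# smooth ordering starting from frame 0.
feats = np.array([[0.0], [3.0], [1.0], [2.0]])
print(resequence(feats))  # [0, 2, 3, 1]
```

A greedy walk can get trapped in locally smooth but globally poor orderings; the thesis instead frames this as an optimal traversal of a graph built over the perceptual manifold.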
| Date of Award | 2020 |
|---|---|
| Original language | English |
| Supervisor | Tong-Yee Lee (Supervisor) |
Learning a Perceptual Manifold for Animation Video Resequencing
爾斯, 查. (Author). 2020
Student thesis: Doctoral Thesis