High Resolution Animated Scenes from Stills (2011)


ABSTRACT:

Current techniques for generating animated scenes involve either videos (whose resolution is limited) or a single image (which requires a significant amount of user interaction). In this project, we describe a system that allows the user to quickly and easily produce a compelling-looking animation from a small collection of high resolution stills. Our system has two unique features. First, it applies an automatic partial temporal order recovery algorithm to the stills in order to approximate the original scene dynamics. The output sequence is subsequently extracted using a second-order Markov Chain model. Second, a region with large motion variation can be automatically decomposed into semiautonomous regions such that their temporal orderings are softly constrained. This is to ensure motion smoothness throughout the original region. The final animation is obtained by frame interpolation and feathering. Our system also provides a simple-to-use interface to help the user to fine-tune the motion of the animated scene. Using our system, an animated scene can be generated in minutes. We show results for a variety of scenes.

Project Introduction

A single picture conveys a lot of information about the scene, but it rarely conveys the scene’s true dynamic nature. A video effectively does both, but is limited in resolution. Off-the-shelf camcorders can capture video at a resolution of 720 × 480 at 30 fps, but this pales in comparison to consumer digital cameras, whose resolution can be as high as 16 megapixels. What if we wish to produce a high resolution animated scene that reasonably reflects the true dynamic nature of the scene? Video textures would be the perfect solution for producing arbitrarily long video sequences, if only very high resolution camcorders existed.

Overview

Systems that animate a single still image can generate compelling-looking animated scenes, but they have a major drawback: they require a considerable amount of manual input. Furthermore, since the animation is specified completely manually, it might not reflect the true scene dynamics. We take a different tack that bridges video textures and single-image animation: we use as input a small collection of high resolution stills that (under-)samples the dynamic scene. This collection offers both the benefit of high resolution and some indication of the dynamic nature of the scene (assuming that the scene has some degree of regularity in motion). We are also motivated by the need for a more practical solution that allows the user to easily generate the animated scene. In this paper, we describe a scene animation system that can easily generate a video or video texture from a small collection of stills (typically, 10 to 20 stills captured within 1 to 2 minutes, depending on the complexity of the scene motion). Our system first builds a graph that links similar images. It then recovers partial temporal orders among the input images and uses a second-order Markov chain model to generate an image sequence for the video or video texture.
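As a rough illustration of the graph-building step (the project itself is implemented in C#; this is a hedged Python sketch, not the paper's exact formulation), the following links every pair of stills whose mean per-pixel distance falls below a threshold. Both the distance measure and the threshold are assumptions for illustration.

```python
import numpy as np

def build_similarity_graph(stills, threshold):
    """Link every pair of stills whose root-mean-square pixel
    distance falls below `threshold`; the edge weight is that
    distance.  Returns an adjacency dict {i: {j: dist, ...}}."""
    n = len(stills)
    graph = {i: {} for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.sqrt(np.mean((stills[i] - stills[j]) ** 2))
            if dist < threshold:
                graph[i][j] = dist
                graph[j][i] = dist
    return graph

# Toy example: three tiny grayscale "stills".
a = np.zeros((4, 4))
b = a + 0.1   # very similar to a
c = a + 5.0   # far from both
g = build_similarity_graph([a, b, c], threshold=1.0)
# a and b end up linked; c is isolated
```

Transitions in the final animation are only ever drawn along edges of this graph, which is why it needs to link visually similar frames.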

Our system is designed to allow the user to easily fine-tune the animation. For example, the user has the option to manually specify regions where animation occurs independently (which we term independent animated regions (IARs)) so that different time instances of each IAR can be used independently. An IAR with large motion variation can further be automatically decomposed into semi-independent animated regions (SIARs) in order to make the motion appear more natural. The user also has the option to modify the dynamics (e.g., speed up or slow down the motion, or choose different motion parameters) through a simple interface. Finally, all regions are frame interpolated and feathered at their boundaries to produce the final animation. The user needs only a few minutes of interaction to finish the whole process. In our work, we limit our scope to quasi-periodic motion, i.e., dynamic textures.

There are two key features of our system. One is the automatic partial temporal order recovery. This recovery algorithm is critical because the original capture order typically does not reflect the true dynamics due to temporal undersampling.

As a result, the input images typically have to be reordered. The recovery algorithm automatically suggests orders for subsets of stills, and these recovered partial orders provide reference dynamics for the animation. The other feature is the system's ability to automatically decompose an IAR into SIARs at the user's request and to handle the interdependence among the SIARs. IAR decomposition can greatly reduce the dependence among the temporal orderings of local samples when the IAR has significant motion variation that would otherwise result in unsatisfactory animation. Our system then finds the optimal processing order among the SIARs and imposes soft constraints to maintain motion smoothness among them.

Proposed System

  • The proposed system is a scene animation system that can easily generate a video or video texture from a small collection of stills.
  • Our system first builds a graph that links similar images. It then recovers partial temporal orders among the input images and uses a second-order Markov Chain model to generate an image sequence of the video or video texture. Our system is designed to allow the user to easily fine-tune the animation.

Modules and Their Descriptions:

This project contains six modules:

  1. Preprocessing
  2. Building a graph
  3. Motion creation
  4. Layer based approach
  5. Scene making
  6. Manual editing

Preprocessing

Color enhancement, image quality improvement, size correction, and noise removal.

Algorithms: morphological filters, Automatic Color Enhancement (ACE).
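A minimal Python sketch of the noise-removal part of this module: morphological opening (erosion followed by dilation) removes bright speckle noise smaller than the structuring element. The 3×3 square element is an assumption, and the ACE color-enhancement step is omitted here.

```python
import numpy as np

def erode(img, k=3):
    """Grayscale erosion: each output pixel is the minimum of its
    k×k neighborhood (edges replicated by padding)."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = p[y:y + k, x:x + k].min()
    return out

def dilate(img, k=3):
    """Grayscale dilation: maximum of each k×k neighborhood."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = p[y:y + k, x:x + k].max()
    return out

def open_filter(img, k=3):
    """Morphological opening: removes isolated bright noise pixels
    smaller than the structuring element."""
    return dilate(erode(img, k), k)

# A flat image with a single bright noise pixel.
img = np.zeros((7, 7))
img[3, 3] = 1.0
clean = open_filter(img)
# the isolated speckle is removed
```

A real implementation would use a library routine (e.g., OpenCV's morphology operations) rather than explicit loops; the loops here just make the definition visible.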

Building a Graph

• Image comparison.

• Algorithms: Floyd's algorithm, partial temporal order recovery algorithm.
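Floyd's algorithm computes all-pairs shortest paths over the image-similarity graph, giving the cheapest sequence of visually smooth transitions between any two stills. A compact Python sketch (the toy weight matrix is hypothetical; `inf` marks unlinked pairs):

```python
import numpy as np

def floyd_warshall(weights):
    """All-pairs shortest path distances.  weights[i][j] is the
    direct transition cost between stills i and j (np.inf if the
    graph has no edge between them)."""
    d = np.array(weights, dtype=float)
    n = d.shape[0]
    for k in range(n):
        # Relax every pair through intermediate node k at once.
        d = np.minimum(d, d[:, k:k + 1] + d[k:k + 1, :])
    return d

INF = np.inf
w = np.array([[0,   1, INF],
              [1,   0, 2],
              [INF, 2, 0]], dtype=float)
d = floyd_warshall(w)
# stills 0 and 2 are not directly linked, but the path 0 -> 1 -> 2
# gives a finite transition cost of 3
```

The broadcasted `np.minimum` update is equivalent to the classic triple loop, just vectorized over the i, j pairs.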

Motion Creation

• Low-resolution optical flow is created between two adjacent images based on their distance; the graph is then sampled.

• Algorithms: statistical approach, Markov chain model.
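In a second-order Markov chain, the next frame is chosen conditioned on the previous two frames, which preserves the direction of motion. A minimal Python sketch with a hand-built (hypothetical) transition table; the real system derives candidate transitions from the similarity graph and the recovered partial orders.

```python
import random

def sample_sequence(transitions, start, length, seed=0):
    """Generate a frame-index sequence from a second-order Markov
    chain.  `transitions` maps (frame_before_last, last_frame) to a
    list of candidate next frames; candidates are sampled uniformly
    here for simplicity."""
    rng = random.Random(seed)
    seq = list(start)              # two seed frames
    while len(seq) < length:
        key = (seq[-2], seq[-1])
        candidates = transitions.get(key)
        if not candidates:         # dead end: stop the sequence
            break
        seq.append(rng.choice(candidates))
    return seq

# Hypothetical chain over 3 stills that cycles 0 -> 1 -> 2 -> 0 ...
t = {(0, 1): [2], (1, 2): [0], (2, 0): [1]}
seq = sample_sequence(t, start=(0, 1), length=8)
# → [0, 1, 2, 0, 1, 2, 0, 1]
```

With more than one candidate per state, the same machinery yields a non-repeating video texture instead of a fixed cycle.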

Layer Based Approach

• Feathering techniques, motion creation.

• Algorithms: Bayesian matting, Canny edge detection.
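Feathering can be sketched as blending each animated region into the background with a softened alpha mask, so region boundaries do not show hard seams. The repeated box-blur ramp below is an assumption standing in for whatever falloff the project actually uses.

```python
import numpy as np

def feather_blend(region, background, mask, width=3):
    """Blend `region` into `background` with a soft alpha ramp at
    the mask boundary.  Each pass averages a pixel with its four
    neighbors, widening the ramp by roughly one pixel."""
    alpha = mask.astype(float)
    for _ in range(width):
        p = np.pad(alpha, 1, mode='edge')
        alpha = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2]
                 + p[1:-1, 2:] + p[1:-1, 1:-1]) / 5.0
    return alpha * region + (1 - alpha) * background

bg = np.zeros((8, 8))              # dark background
fg = np.ones((8, 8))               # bright animated region
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1                 # region occupies the center
out = feather_blend(fg, bg, mask)
# center stays near 1, far corners stay 0, with a soft ramp between
```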

Scene Making

• Video texture creation.

• AVI file conversion based on the time sequence.

• Frame interpolation; output format: AVI.
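Frame interpolation between consecutive stills can be approximated, in the simplest case, by a linear cross-dissolve. This Python sketch ignores the optical-flow guidance the system uses and just blends pixel values; the in-between count is a free parameter.

```python
import numpy as np

def interpolate_frames(a, b, n_inbetween):
    """Generate n_inbetween intermediate frames between stills a
    and b by linear blending (a stand-in for flow-guided frame
    interpolation)."""
    frames = []
    for i in range(1, n_inbetween + 1):
        t = i / (n_inbetween + 1)  # blend weight in (0, 1)
        frames.append((1 - t) * a + t * b)
    return frames

a = np.zeros((2, 2))
b = np.ones((2, 2))
mids = interpolate_frames(a, b, 3)
# three frames with blend weights 0.25, 0.5, 0.75
```

Writing these frames out in time order is what the AVI-conversion step of this module would then do.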

Manual Editing

• Manual reordering.

• Editing the image.

• Motion smoothness.

• Measuring motion irregularity.

Software/ Hardware Requirements

Hardware Requirements

• System: Pentium IV, 2.4 GHz

• Hard disk: 40 GB

• RAM: 512 MB

Software Requirements

• Operating system: Windows XP Professional

• Technology: Microsoft Visual Studio .NET 2008

• Coding language: C#
