Hybrid Artist- and Data-driven Techniques for Character Animation

Leslie Kanani Michiko Ikemoto

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2007-54

May 15, 2007

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-54.pdf

This thesis describes methods for automating the repetitive parts of character animation using semi-supervised learning algorithms. In our framework, we observe the outputs the artist would like for given inputs. Using these observations as training data, we fit input-output mapping functions that generalize the training data to novel inputs. The artist can provide feedback by editing the output, and the system uses this feedback to refine its mapping function. This iterative process continues until the artist is satisfied.
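The loop below is a minimal sketch of this train-predict-refine cycle, assuming a generic scikit-learn regressor as the mapping function and a hypothetical artist_review callback standing in for the artist's editing session; it is illustrative, not the thesis's actual implementation.

import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def fit_mapping(X, Y):
    # Fit an input-output mapping from artist-provided examples.
    # (KNeighborsRegressor is an illustrative stand-in for the
    # learned mapping function described in the abstract.)
    model = KNeighborsRegressor(n_neighbors=3)
    model.fit(X, Y)
    return model

def refine_until_satisfied(X_train, Y_train, X_new, artist_review):
    # Iterate: predict on novel input, fold the artist's edits back in.
    while True:
        model = fit_mapping(X_train, Y_train)
        Y_pred = model.predict(X_new)
        # artist_review is a hypothetical callback returning the edited
        # outputs and a flag indicating whether the artist is satisfied.
        Y_edit, satisfied = artist_review(X_new, Y_pred)
        if satisfied:
            return model
        # Edited frames become new supervised training examples.
        X_train = np.vstack([X_train, X_new])
        Y_train = np.vstack([Y_train, Y_edit])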

We apply this framework to three important character animation problems. First, sliding foot plants are a common artifact resulting from almost any attempt to modify character motion. We describe an on-line method for fixing this artifact that requires no manual clean-up. Using an artist-trained oracle, we demonstrate that we can accurately annotate animation sequences with foot plant markers. We then use an off-the-shelf inverse kinematics solver to determine the position of each foot plant.
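As a rough illustration, an artist-trained oracle of this kind could be a per-frame classifier fit to artist-labeled frames. The foot-height and foot-speed features, the y-up convention, and the SVC choice below are assumptions made for the sketch; the subsequent positioning of each plant with an off-the-shelf inverse kinematics solver is not shown.

import numpy as np
from sklearn.svm import SVC

def plant_features(foot_pos, dt=1.0 / 30.0):
    # foot_pos: (T, 3) foot positions over T frames.
    # Returns (T, 2) [height, speed] features (assumed, for illustration).
    vel = np.gradient(foot_pos, dt, axis=0)
    speed = np.linalg.norm(vel, axis=1)
    height = foot_pos[:, 1]  # assumes a y-up skeleton
    return np.column_stack([height, speed])

def train_oracle(foot_pos, labels):
    # labels: artist-provided 0/1 foot plant markers for a training clip.
    oracle = SVC(kernel="rbf")
    oracle.fit(plant_features(foot_pos), labels)
    return oracle

def annotate(oracle, foot_pos):
    # Mark foot plant frames in a novel clip.
    return oracle.predict(plant_features(foot_pos))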

Second, to our knowledge, all motion synthesis algorithms sometimes produce character animation that looks unnatural (i.e., contains artifacts or is otherwise distinguishable as synthesized motion by a human observer). We describe methods for automatically evaluating synthesized character animation and demonstrate that they can serve as the testing component of hypothesize-and-test motion synthesis algorithms. Our first method uses an SVM-based classifier trained on joint position data. We found it difficult to train a reliable classifier in this feature space because natural- and unnatural-looking motion can lie close together. In follow-on work, we found that using features known to be perceptually important to human observers yields a classifier more reliable than the current state of the art.
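The sketch below shows how such a classifier could serve as the testing half of a hypothesize-and-test loop. The SVM-on-features idea comes from the paragraph above; the propose_motion and extract_features callbacks are hypothetical stand-ins for a motion synthesizer and its feature extraction.

from sklearn.svm import SVC

def train_naturalness_classifier(features, labels):
    # labels: 1 = looks natural, 0 = contains artifacts.
    clf = SVC(kernel="rbf")
    clf.fit(features, labels)
    return clf

def synthesize_until_natural(clf, propose_motion, extract_features,
                             max_tries=100):
    # Hypothesize-and-test: resample until the classifier accepts.
    for _ in range(max_tries):
        motion = propose_motion()                  # hypothesize
        if clf.predict([extract_features(motion)])[0] == 1:
            return motion                          # test passed
    return None                                    # no acceptable sample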

Third, artists can create compelling character animations by manipulating the details of a character's motion, but this process is labor-intensive and repetitive. We show that we can make character animation more efficient, yet still controllable, by generalizing the edits an animator makes on short training sequences to other sequences. Using Gaussian process models, our method predicts the pose and dynamics of the character at each frame (i.e., time instance in the animation), then combines these estimates using probabilistic inference. Our method can be used to edit motion for an existing character, or to map motion from a control character onto a very different target character. Finally, we present data from interviews with professional animators suggesting that generalizing edits can save artists significant time and effort.
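A minimal sketch of the idea, assuming scikit-learn's GaussianProcessRegressor for the per-frame pose and dynamics models and a simple inverse-variance blend as a simplified stand-in for the probabilistic inference described above:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def train_edit_models(src_poses, edited_poses):
    # src_poses / edited_poses: (T, D) pose vectors from the clip the
    # artist edited. One GP predicts the edited pose from the source
    # pose; a second predicts the edited frame-to-frame change.
    pose_gp = GaussianProcessRegressor().fit(src_poses, edited_poses)
    delta_gp = GaussianProcessRegressor().fit(
        src_poses[:-1], np.diff(edited_poses, axis=0))
    return pose_gp, delta_gp

def apply_edits(pose_gp, delta_gp, src_poses):
    # Predict each frame, then fuse the pose and dynamics estimates.
    out = [pose_gp.predict(src_poses[:1])[0]]
    for t in range(1, len(src_poses)):
        p, p_std = pose_gp.predict(src_poses[t:t+1], return_std=True)
        d, d_std = delta_gp.predict(src_poses[t-1:t], return_std=True)
        q = out[-1] + d[0]                      # pose implied by dynamics
        # Weight each estimate by its predictive precision (illustrative
        # simplification of the thesis's probabilistic inference).
        w = d_std[0]**2 / (p_std[0]**2 + d_std[0]**2 + 1e-9)
        out.append(w * p[0] + (1.0 - w) * q)
    return np.array(out)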

Advisor: David Forsyth


BibTeX citation:

@phdthesis{Ikemoto:EECS-2007-54,
    Author= {Ikemoto, Leslie Kanani Michiko},
    Title= {Hybrid Artist- and Data-driven Techniques for Character Animation},
    School= {EECS Department, University of California, Berkeley},
    Year= {2007},
    Month= {May},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-54.html},
    Number= {UCB/EECS-2007-54},
}

EndNote citation:

%0 Thesis
%A Ikemoto, Leslie Kanani Michiko 
%T Hybrid Artist- and Data-driven Techniques for Character Animation
%I EECS Department, University of California, Berkeley
%D 2007
%8 May 15
%@ UCB/EECS-2007-54
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-54.html
%F Ikemoto:EECS-2007-54