Learning Motion Controllers with Adaptive Depth Perception
 

Abstract

We present a novel approach to real-time character animation that allows a character to move autonomously based on vision input. By letting the character “see” its environment directly through depth perception, we can skip the manual design phase of parameterizing the state space in a reinforcement learning framework. In previous work, this parameterization is done by hand, since finding a minimal set of parameters that describes a character's environment is crucial for efficient learning. Learning from raw vision input, however, suffers from the “curse of dimensionality”, which we avoid by introducing a hierarchical state model and a novel regression algorithm. We demonstrate that our controllers allow a character to navigate and survive in environments containing arbitrarily shaped obstacles, which is hard to achieve with existing reinforcement learning frameworks.
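
To make the setting above more concrete, here is a rough, heavily simplified sketch of a vision-driven controller: a toy 2D agent whose only state is a coarsely quantized depth scan (ray-cast distances to obstacles) plus a goal bearing, trained with plain tabular Q-learning. This is an illustrative assumption, not the paper's method: the environment, reward, quantization, and all parameter values are made up, and the paper's hierarchical state model and regression algorithm are not reproduced here. The coarse quantization only hints at why some form of state reduction is needed to keep learning from depth input tractable.

# Hedged sketch (not the paper's algorithm): a toy 2D navigation task in which the
# agent's state is a coarse "depth scan" of ray-cast distances to obstacles,
# standing in for the depth perception described in the abstract.
import math
import random
from collections import defaultdict

OBSTACLES = [(3.0, 2.0, 1.0), (6.0, 6.0, 1.5), (2.0, 7.0, 1.0)]  # (x, y, radius), made up
GOAL = (9.0, 9.0)
ACTIONS = [-0.4, 0.0, 0.4]          # steering offsets in radians
N_RAYS, MAX_DEPTH, STEP = 5, 4.0, 0.5

def ray_depth(x, y, heading, angle):
    # March a ray outward and return the distance to the nearest obstacle (capped).
    for t in range(1, int(MAX_DEPTH / 0.2) + 1):
        d = t * 0.2
        px = x + d * math.cos(heading + angle)
        py = y + d * math.sin(heading + angle)
        if any((px - ox) ** 2 + (py - oy) ** 2 < r * r for ox, oy, r in OBSTACLES):
            return d
    return MAX_DEPTH

def observe(x, y, heading):
    # Quantize each ray's depth to near/medium/far so the tabular state space stays small.
    angles = [(-1.0 + 2.0 * i / (N_RAYS - 1)) * math.pi / 3 for i in range(N_RAYS)]
    scan = tuple(min(2, int(ray_depth(x, y, heading, a) / MAX_DEPTH * 3)) for a in angles)
    goal_dir = math.atan2(GOAL[1] - y, GOAL[0] - x) - heading
    return scan + (int(goal_dir // (math.pi / 4)) % 8,)

def step(x, y, heading, action):
    # Apply a steering action, move forward, and compute a simple reward.
    heading += action
    x += STEP * math.cos(heading)
    y += STEP * math.sin(heading)
    hit = any((x - ox) ** 2 + (y - oy) ** 2 < r * r for ox, oy, r in OBSTACLES)
    reached = math.hypot(x - GOAL[0], y - GOAL[1]) < 0.7
    reward = -5.0 if hit else (10.0 if reached else -0.1)
    return x, y, heading, reward, hit or reached

Q = defaultdict(float)  # tabular action values keyed by (observation, action index)

def train(episodes=300, alpha=0.2, gamma=0.95, eps=0.1):
    # Standard epsilon-greedy Q-learning over the quantized depth observations.
    for _ in range(episodes):
        x, y, heading = 0.5, 0.5, math.pi / 4
        s = observe(x, y, heading)
        for _ in range(100):
            if random.random() < eps:
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q[(s, i)])
            x, y, heading, r, done = step(x, y, heading, ACTIONS[a])
            s2 = observe(x, y, heading)
            best_next = max(Q[(s2, i)] for i in range(len(ACTIONS)))
            Q[(s, a)] += alpha * (r + gamma * best_next * (not done) - Q[(s, a)])
            s = s2
            if done:
                break

if __name__ == "__main__":
    random.seed(0)
    train()
    print("learned Q entries:", len(Q))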

Paper

Learning Motion Controllers with Adaptive Depth Perception

Wan-Yen Lo, Claude Knaus, and Matthias Zwicker

ACM SIGGRAPH / Eurographics Symposium on Computer Animation, 2012

Bibtex

@inproceedings{LZ:ADP:2012,
  author    = {Wan-Yen Lo and Claude Knaus and Matthias Zwicker},
  title     = {Learning Motion Controllers with Adaptive Depth Perception},
  booktitle = {Proceedings of the 2012 ACM SIGGRAPH/Eurographics Symposium on Computer Animation},
  year      = {2012},
  month     = {July}
}