Document Type
Article
Publication Date
2011
Published In
International Journal of Robotics Research
Abstract
We present a novel approach to legged locomotion over rough terrain that is thoroughly rooted in optimization. This approach relies on a hierarchy of fast, anytime algorithms to plan a set of footholds, along with the dynamic body motions required to execute them. Components within the planning framework coordinate to exchange plans, cost-to-go estimates, and 'certificates' that ensure the output of an abstract high-level planner can be realized by lower layers of the hierarchy. The burden of careful engineering of cost functions to achieve desired performance is substantially mitigated by a simple inverse optimal control technique. Robustness is achieved by real-time re-planning of the full trajectory, augmented by reflexes and feedback control. We demonstrate the successful application of our approach in guiding the LittleDog quadruped robot over a variety of types of rough terrain. Other novel aspects of our past research efforts include a variety of pioneering inverse optimal control techniques as well as a system for planning using arbitrary pre-recorded robot behavior.
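The abstract describes a hierarchy of planners that exchange plans, cost-to-go estimates, and feasibility "certificates". The sketch below is a hypothetical illustration of that coordination pattern only, not the paper's implementation: the class and function names (BodyMotionPlanner, Foothold, plan_footholds), the 0.3 step-length bound, and the greedy best-first search are all assumptions made for the example.

```python
# Hypothetical sketch: a high-level foothold planner that only commits to steps
# the lower layer certifies as executable, using the lower layer's cost-to-go
# estimate to guide the search. Names and numbers are illustrative assumptions.
from dataclasses import dataclass
import heapq


@dataclass(frozen=True)
class Foothold:
    x: float
    y: float


class BodyMotionPlanner:
    """Low-level layer: certifies foothold transitions and estimates cost-to-go."""

    def certify(self, current: Foothold, target: Foothold) -> bool:
        # Placeholder feasibility check; a real system would plan the body
        # trajectory and verify reachability and stability margins.
        step = abs(target.x - current.x) + abs(target.y - current.y)
        return step <= 0.3  # assumed maximum step length (illustrative)

    def cost_to_go(self, foothold: Foothold, goal: Foothold) -> float:
        # Heuristic estimate handed back up to the high-level planner.
        return abs(goal.x - foothold.x) + abs(goal.y - foothold.y)


def plan_footholds(start: Foothold, goal: Foothold,
                   low_level: BodyMotionPlanner,
                   candidates_fn) -> list[Foothold]:
    """Greedy best-first search over footholds, expanding only certified steps."""
    frontier = [(low_level.cost_to_go(start, goal), 0, start, [start])]
    visited = set()
    counter = 1  # tie-breaker so the heap never compares Foothold objects
    while frontier:
        _, _, node, path = heapq.heappop(frontier)
        if low_level.cost_to_go(node, goal) < 1e-6:  # reached the goal exactly
            return path
        if node in visited:
            continue
        visited.add(node)
        for cand in candidates_fn(node):
            if cand not in visited and low_level.certify(node, cand):
                heapq.heappush(frontier,
                               (low_level.cost_to_go(cand, goal),
                                counter, cand, path + [cand]))
                counter += 1
    return []  # no certified sequence of footholds found


if __name__ == "__main__":
    # Example use with a simple 1-D lattice of candidate footholds (illustrative).
    planner = BodyMotionPlanner()
    candidates = lambda f: [Foothold(f.x + 0.25, 0.0)]
    print(plan_footholds(Foothold(0.0, 0.0), Foothold(1.0, 0.0),
                         planner, candidates))
```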
Recommended Citation
Matthew A. Zucker, N. Ratliff, M. Stolle, J. Chestnutt, J. A. Bagnell, C. G. Atkeson, and J. Kuffner (2011). "Optimization and Learning for Rough Terrain Legged Locomotion." International Journal of Robotics Research, 30(2), 175-191. DOI: 10.1177/0278364910392608
https://works.swarthmore.edu/fac-engineering/30
Comments
This work is a preprint that is freely available courtesy of SAGE Publications.