arXiv link

Abstract: As both legged robots and embedded compute have become more capable, researchers have started to focus on field deployment of these robots. Robust autonomy in unstructured environments requires perceiving the world around the robot in order to avoid hazards. However, incorporating perception online while maintaining agile motion is more challenging for legged robots than for other mobile robots, due to the complex planners and controllers required to handle the dynamics of locomotion. This report compares three recent approaches to perceptive locomotion and discusses the different ways in which vision can be used to enable legged autonomy.