Teaching Bipedal Robots to Step Across Discrete Terrain
UC Berkeley and CMU researchers demonstrate dynamic walking on stepping stones
Like humans and most terrestrial animals, legged robots need the ability to move over rugged terrain to be useful in applications such as disaster response and search and rescue. However, designing control algorithms that can handle discrete footholds (like rubble or stepping stones) is challenging: there are strict constraints on foot placement that can’t be violated, and the motion of these systems is governed by complex dynamical equations.
By leveraging recent advances in optimal and nonlinear control systems, our labs at the University of California, Berkeley, and Carnegie Mellon University have demonstrated dynamic walking on stepping stones on the ATRIAS robot, even when the distance between the stones as well as their height is varied randomly.
Why Legged Robots?
Legged robots are incredible machines capable of navigating over unstructured and uneven terrain. They’re much more versatile than their wheeled counterparts, which have a hard time navigating terrain with gaps or significant changes in height. The capability of bipedal robots to traverse discrete and unpredictable terrain makes them ideal candidates for applications such as space exploration, disaster response, and as personal robots in urban settings, where they must locomote over discrete terrain designed for humans, such as stairways or stepping stones.
In fact, the “W” Prize, aimed at challenging the state of the art in machine locomotion, also includes tasks such as negotiating a series of stepping stones and staircases. While legged robots have seen significant improvements in their mechanical design and control strategies over the years, they are still far from being deployed out in the real world, as seen in the 2015 DARPA Robotics Challenge. Current state-of-the-art robots are slow, with quasi-static motions; they are not robust to unexpected disturbances; and they are inefficient in their energy usage.
Traversing discrete terrain is also a challenging problem for lower-limb exoskeletons, which currently require the user to balance with additional aids, such as crutches, and so do not allow hands-free dynamic walking. This was exemplified by the recent exoskeleton race at the Cybathlon competition, which involved walking over a series of stepping stones. By designing robots and feedback algorithms that can achieve precise footstep placement over complex terrain in a safe and reliable manner, we can enable new robotic applications and also translate these ideas to biomechatronic devices that augment humans.
Dynamic Walking on Stepping Stones
At the Hybrid Robotics Group at UC Berkeley (formerly at Carnegie Mellon), we have been working on developing formal control frameworks for high degree-of-freedom bipedal robots that not only guarantee precise footstep placement over discrete terrain, but are also robust to model uncertainties and external forces. These methods are independent of the specific robot itself, and have been tested (in simulation) on models of a variety of robots including RABBIT, ATRIAS and DURUS.
Moreover, these robots do not “know” what the terrain will be like ahead of time; only the next step location is shown to the robot, a scenario that closely represents what a robot might encounter in the real world. We’ve also experimentally tested our control algorithms on the ATRIAS bipedal robot platform, and were able to achieve dynamic walking over stochastically varying discrete terrain, with step lengths varying between 30 and 65 centimeters and step heights requiring the robot to step up or down by as much as 22 centimeters, all while maintaining an average walking speed of 0.6 meters per second.
We believe that this is the first time that dynamic walking on stepping stones with simultaneous variation in step length and step height has been successfully demonstrated on a bipedal robot.
Why is discrete walking like this such a hard problem in robotics? To begin, bipedal robots are high degree-of-freedom systems whose motion is governed by complex nonlinear differential equations that capture the hybrid dynamics of the ground interaction: The robot must interact with the environment by constantly making and breaking contact with its surroundings.
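To make the “hybrid” structure concrete, here is a toy example: a bouncing point mass, which shares the flow/guard/reset structure of legged locomotion (continuous dynamics punctuated by discrete impact events), even though a biped’s actual dynamics and impact map are far more complex. The names and numbers below are purely illustrative.

```python
# Toy hybrid system: a bouncing point mass. Illustrates the
# flow / guard / reset structure that also underlies legged robots.
# (A sketch only -- a biped's continuous dynamics and impact map
# are much more complex, but the hybrid structure is the same.)

def simulate_bounces(y0, v0, e=0.8, dt=1e-4, t_end=2.0, g=9.81):
    """Integrate free flight; on touchdown (guard: y <= 0 with v < 0),
    apply the discrete reset v -> -e*v. Returns the number of impacts."""
    y, v, t, impacts = y0, v0, 0.0, 0
    while t < t_end:
        # continuous phase: ballistic flight (semi-implicit Euler)
        v -= g * dt
        y += v * dt
        # guard: contact with the ground
        if y <= 0.0 and v < 0.0:
            y = 0.0
            v = -e * v          # reset map: instantaneous impact
            impacts += 1
        t += dt
    return impacts
```

Dropping the mass from 1 meter, `simulate_bounces(1.0, 0.0)` counts the ground impacts within the first two seconds; each impact is a discrete jump in the state, just as each footstep is for a walking robot.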
Furthermore, the robots we work with, like ATRIAS, are underactuated, meaning that there are no actuators at the ankles, just point feet. To give you a sense of how hard this is, imagine stepping across stepping stones or climbing a flight of stairs while on stilts: the only way to maintain balance is to keep taking steps. The stepping-stones problem also places strict constraints on foot placement, and of course in the real world, these stepping stones could also topple over (a problem we will be working on in the near future). In addition, the robot must work within other physical constraints, such as motor torque limits and friction (the robot must not slip). All these constraints may act against each other, making the control design process nontrivial.
The stepping-stones problem has been widely studied, with some truly impressive results on robots such as Valkyrie and ATLAS. But what’s different about our methods is that they allow for dynamic walking as opposed to the slower quasi-static motions that robots tend to use. By reasoning about the nonlinearities in the dynamics of the system and by taking advantage of recent advances in optimal and nonlinear control technology, we can specify control objectives and desired robot behaviors in a simple and compact form while providing formal stability and safety guarantees. This means our robots can walk over discrete terrain without slipping or falling over, backed by some neat math and some cool experimental videos.
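One standard way such stability guarantees are encoded is through a control Lyapunov function (CLF): the controller picks, at every instant, the smallest input that forces an “energy-like” function V of the tracking error to decay. The sketch below shows the idea on a hypothetical one-dimensional toy system, not on ATRIAS’s dynamics; on a real robot the same decrease condition becomes one constraint in a quadratic program, solved alongside torque limits and friction constraints.

```python
# Minimal sketch of a control-Lyapunov-function (CLF) controller on a
# hypothetical 1-D toy system  xdot = x + u  (NOT the robot's dynamics).
# The min-norm input enforcing  Vdot <= -gamma*V  has a closed form here;
# in the full problem it is one constraint in a QP with torque limits.

def clf_min_norm(x, gamma=2.0, u_max=10.0):
    """Smallest-magnitude u satisfying Vdot <= -gamma*V for
    V(x) = 0.5*x**2, with the input clamped to [-u_max, u_max]."""
    V = 0.5 * x * x
    LfV = x * x            # dV/dx * f(x), with drift f(x) = x
    LgV = x                # dV/dx * g(x), with g(x) = 1
    a = LfV + gamma * V    # decrease condition reads: a + LgV*u <= 0
    if a <= 0.0 or LgV == 0.0:
        u = 0.0            # the drift already satisfies the condition
    else:
        u = -a / LgV       # closed-form min-norm solution in 1-D
    return max(-u_max, min(u, u_max))

# Closed loop: V decreases and the state contracts toward zero.
x, dt = 1.5, 1e-3
for _ in range(6000):
    x += (x + clf_min_norm(x)) * dt
print(abs(x) < 1e-2)       # True: the unstable toy system is stabilized
```

When the clamp at `u_max` is active, the decrease condition can momentarily be violated; in the QP formulation this is handled by relaxing the CLF constraint while keeping hard constraints (like foot placement) strict.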
Our robots are currently “blind,” and we need to give them information about their surroundings, such as the location of the next stepping stone. We are now working on integrating computer vision algorithms, including depth segmentation and deep learning, with our controllers. This would allow the robot to reason about its surrounding environment and lead to the development of a completely autonomous system. With a new robot, Cassie, soon to arrive at Berkeley, we plan to extend our experimental results to 3D walking over real-world stepping stones.
In the long run, this research will help enable bipedal robots to autonomously navigate over rough terrain, both in indoor environments (like stairs and narrow hallways) and in outdoor environments (like wooded paths). The key components of our research are safety, robustness, and agility; that is, we want our robots to step in the “correct” locations so as to prevent them from falling, while remaining robust to unexpected forces and disturbances.
The potential applications of such a technology are numerous: in search and rescue, where autonomous humanoid robots can be deployed instead of human rescuers; in the exploration of unmapped areas, such as the surfaces of other planets, which may be highly uneven; and as personal robots in homes. In addition, the methods we develop for bipedal robots can also be translated to robotic devices that augment humans, such as lower-limb exoskeletons.
Ayush Agrawal is a Ph.D. student working on bipedal robots and exoskeletons in UC Berkeley’s Hybrid Robotics Group, led by Professor Koushil Sreenath. Quan Nguyen just finished his Ph.D. at Carnegie Mellon on robust control of bipedal robots walking on discrete terrain. They are lead authors of a paper on dynamic bipedal locomotion over stochastic discrete terrain submitted to the International Journal of Robotics Research.