Robot masters human balancing act

Engineers from the Cockrell School of Engineering at the University of Texas at Austin have successfully demonstrated a novel approach to human-like balance in a biped robot.

When walking through a crowded place, we typically aren't thinking about how we avoid bumping into one another. Humans are built with the gamut of complex skills required to execute these seemingly simple motions.

Now, thanks to the UT Austin researchers, robots may soon be capable of the same. The work was led by Luis Sentis, associate professor in the Department of Aerospace Engineering and Engineering Mechanics, and his team in the Human Centered Robotics Laboratory.

By translating a key human physical dynamic skill – maintaining whole-body balance – into a mathematical equation, the team was able to use that formula to program their robot, Mercury, which they built and tested over the course of six years. They calculated the margin beyond which the average person loses their balance and falls while walking to be a strikingly small figure: just 2 cm.
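The article doesn't reproduce the equation itself, but a standard way to express this kind of walking-balance margin is the capture point of a linear inverted pendulum model. The Python sketch below is an illustrative calculation under that assumption, using the article's 2 cm figure as the stability margin; the function names and parameter values are hypothetical, not taken from the Mercury controller.

```python
import math

def capture_point(com_pos, com_vel, com_height, g=9.81):
    """Instantaneous capture point of a linear inverted pendulum:
    the ground point the walker must step to in order to come to
    rest.  x_cp = x + x_dot / omega, where omega = sqrt(g / h)."""
    omega = math.sqrt(g / com_height)
    return com_pos + com_vel / omega

def is_balanced(com_pos, com_vel, com_height, foot_pos, margin=0.02):
    """Balance check: the capture point must lie within `margin`
    metres (2 cm, per the article's figure) of the support point."""
    x_cp = capture_point(com_pos, com_vel, com_height)
    return abs(x_cp - foot_pos) <= margin

# Example: centre of mass 1 m high, 1 cm ahead of the stance foot,
# drifting forward at 0.02 m/s -- still inside the 2 cm margin.
print(is_balanced(com_pos=0.01, com_vel=0.02, com_height=1.0, foot_pos=0.0))
```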

“Essentially, we have developed a technique to teach autonomous robots how to maintain balance even when they are hit unexpectedly, or a force is applied without warning,” Sentis said. “This is a particularly valuable skill we as humans frequently use when navigating through large crowds.”

Sentis said their technique has been successful in dynamically balancing both bipeds without ankle control and full humanoid robots.

Dynamic, human-like movement is far harder to achieve for a robot without ankle control than for one equipped with actuated, or powered, feet. So the UT Austin team used an efficient whole-body controller developed by integrating contact-consistent rotators, which send and receive the data that informs the robot of the best possible move to make next in response to a collision. They also applied inverse kinematics – a mathematical technique often used in 3D animation to achieve realistic-looking movement from animated characters – along with low-level motor position controllers.
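As a concrete illustration of the inverse-kinematics step, here is a minimal Python sketch for a planar two-link leg: given a desired foot position relative to the hip, it solves analytically for the hip and knee angles, which low-level position controllers like those mentioned above could then track. The two-link simplification and the link lengths are assumptions for illustration, not Mercury's actual kinematics.

```python
import math

def two_link_ik(x, y, l1=0.4, l2=0.4):
    """Analytic inverse kinematics for a planar two-link leg.
    (x, y): desired foot position relative to the hip joint.
    l1, l2: thigh and shank lengths in metres (illustrative values).
    Returns (hip_angle, knee_angle) in radians."""
    d2 = x * x + y * y
    # Reachability check: the target must lie inside the leg's workspace.
    if d2 > (l1 + l2) ** 2 or d2 < (l1 - l2) ** 2:
        raise ValueError("foot target out of reach")
    # Law of cosines gives the knee angle.
    cos_knee = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    knee = math.acos(max(-1.0, min(1.0, cos_knee)))
    # Hip angle: direction to the target, corrected for the knee bend.
    hip = math.atan2(y, x) - math.atan2(l2 * math.sin(knee),
                                        l1 + l2 * math.cos(knee))
    return hip, knee

# Example: place the foot 0.7 m below and 0.1 m ahead of the hip,
# then hand the resulting joint angles to the position controllers.
hip, knee = two_link_ik(0.1, -0.7)
print(math.degrees(hip), math.degrees(knee))
```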

Mercury may have been tailored to the specific needs of its creators, but the fundamental equations of human locomotion that underpin this technique are, in theory, applicable to any comparable embodied AI and robotics research.

“We choose to mimic human movement and physical form in our lab because I believe AI designed to be similar to humans gives the technology greater familiarity,” Sentis said. “This, in turn, will make us more comfortable with robotic behaviour, and the more we can relate, the easier it will be to recognise just how much potential AI has to enhance our lives.”