Simulated humanoid robots learn to hike rugged terrain autonomously

2025-09-15 20:38:28

by Patricia DeLacey, University of Michigan College of Engineering

A new AI framework called LEGO-H trains humanoid robots to hike complex trails, combining visual perception, decision making and motor execution. The robot uses vision to autonomously anticipate short-term goals and guide locomotion along the trail. Bubble size, from large to small, indicates the anticipated direction, while the color shows the temporal order: orange, then green, then gray. Credit: Lin and Yu, 2025.

Training humanoid robots to hike could accelerate development of embodied AI for tasks like autonomous search and rescue, ecological monitoring in unexplored places and more, say University of Michigan researchers who developed an AI model that equips humanoids to hit the trails.

With their new AI framework called LEGO-H, the researchers trained simulated, camera-equipped Unitree Robotics humanoids to plan ahead, avoid obstacles, maintain posture and adjust speed and stride to uneven ground.

"Our model is the first that could give a humanoid robot the ability to see, decide and move entirely on its own—not just walking, but hopping, stepping or jumping as the trail demands. Until now, humanoids have mostly been 'blind," dependent on human operators for every movement decision," said Stella Yu, a professor of computer science and engineering and senior author of a study presented at the IEEE Conference on Computer Vision and Pattern Recognition in Nashville in June 2025.

A humanoid robot autonomously moves through a hiking trail using visual perception, decision making and motor execution. Moving through a simulated, geometric hiking trail, the robot takes small steps walking uphill. The robot steps up onto a short obstacle, then hops back down and continues walking uphill. After two more step-ups and hop-downs, the robot comes to a hole in the ground. It pauses, then sidesteps around the hole, crab-walking to the side. Credit: Lin and Yu, 2025.

The work is published on the arXiv preprint server.

Traditionally, robots have learned to navigate on flat, unobstructed surfaces using pre-built maps and constant human guidance, with high-level planning ("where to go") and low-level execution ("how to move") treated as separate problems.

"Unifying navigation and locomotion in a single policy learning framework lets the robots develop their own movement strategies based on the situation without any human pre-programming patterns," said Kwan-Yee Lin, a research fellow in computer science and engineering and lead author of the study.

In the simulation, the robots are dropped on an unfamiliar trail and asked to navigate to a specific point. They are equipped with visual input, body awareness and a simple GPS-style bearing, such as "the destination is 0.3 miles northeast," rather than turn-by-turn directions.
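The article does not spell out the LEGO-H architecture, but the core idea of a single policy that maps raw observations directly to motor commands can be illustrated with a minimal sketch. The PyTorch module below fuses an egocentric camera view, joint-level body awareness and a coarse goal bearing into joint targets in one network; every network size, observation encoding and the action space here is an illustrative assumption, not the authors' design.

```python
# Minimal sketch of a unified navigation-and-locomotion policy.
# All shapes, encoders and the action space are assumptions for illustration;
# this is not the LEGO-H architecture.
import torch
import torch.nn as nn


class UnifiedHikingPolicy(nn.Module):
    def __init__(self, num_joints: int = 12):
        super().__init__()
        # Small CNN for the egocentric depth view (assumed 64x64 input).
        self.vision_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(128), nn.ReLU(),
        )
        # Proprioception ("body awareness"): joint positions + velocities.
        self.proprio_encoder = nn.Sequential(nn.Linear(2 * num_joints, 64), nn.ReLU())
        # Coarse goal: distance and bearing, like "0.3 miles northeast".
        self.goal_encoder = nn.Sequential(nn.Linear(2, 16), nn.ReLU())
        # One head maps the fused features straight to joint targets, so
        # "where to go" and "how to move" are learned by the same network.
        self.action_head = nn.Sequential(
            nn.Linear(128 + 64 + 16, 128), nn.ReLU(),
            nn.Linear(128, num_joints), nn.Tanh(),
        )

    def forward(self, depth_image, joint_state, goal_vec):
        feats = torch.cat([
            self.vision_encoder(depth_image),
            self.proprio_encoder(joint_state),
            self.goal_encoder(goal_vec),
        ], dim=-1)
        return self.action_head(feats)  # normalized joint position targets


# Example: one forward pass with dummy observations.
policy = UnifiedHikingPolicy(num_joints=12)
action = policy(
    torch.zeros(1, 1, 64, 64),    # egocentric depth view
    torch.zeros(1, 24),           # 12 joint positions + 12 velocities
    torch.tensor([[0.3, 0.79]]),  # distance and bearing toward the goal
)
print(action.shape)  # torch.Size([1, 12])
```

In such a setup the same set of weights would be trained end to end, which is what lets movement strategies emerge from the situation rather than from hand-coded planning and gait modules.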

The virtual robots, one about 6 feet tall (adult-sized) and one roughly 4 feet tall (child-sized), hiked five different trail types, each with five difficulty levels. Performance was measured on completeness, safety and efficiency.
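The article names the three evaluation axes but not how they are computed. Below is a minimal sketch of how completeness, safety and efficiency could be tallied from per-episode simulation logs; the EpisodeLog fields and the formulas are assumptions for illustration, not the metrics defined in the paper.

```python
from dataclasses import dataclass


@dataclass
class EpisodeLog:
    """Per-episode record from one simulated hike; field names are illustrative."""
    reached_goal: bool       # did the robot arrive at the target point?
    distance_covered: float  # meters progressed along the trail
    trail_length: float      # total meters from start to goal
    falls: int               # falls or damaging contacts during the episode
    steps_taken: int         # control steps used
    step_budget: int         # maximum steps allowed for this trail


def summarize(episodes: list[EpisodeLog]) -> dict[str, float]:
    """Aggregate assumed completeness, safety and efficiency scores over a trail set."""
    n = len(episodes)
    completeness = sum(e.distance_covered / e.trail_length for e in episodes) / n
    safety = sum(e.falls == 0 for e in episodes) / n  # fraction of fall-free episodes
    finished = max(1, sum(e.reached_goal for e in episodes))
    efficiency = sum(
        (e.step_budget - e.steps_taken) / e.step_budget
        for e in episodes if e.reached_goal
    ) / finished
    return {"completeness": completeness, "safety": safety, "efficiency": efficiency}


# Example with two dummy episodes:
logs = [
    EpisodeLog(True, 120.0, 120.0, 0, 800, 1200),
    EpisodeLog(False, 60.0, 150.0, 1, 1200, 1200),
]
print(summarize(logs))
```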

When compared to robots given perfect navigation and environmental information in advance, the simulated autonomous robots performed comparably or better in efficiency and safety. Their built-in body awareness helped prevent damage, and removing that aspect noticeably reduced hiking success, the researchers said.

Virtual autonomous robots learned to adapt their posture and motion style to the terrain. For example, when coming to a tight space, the robots learned to lean sideways to squeeze through. They also chose paths based on obstacles, walking around tall obstacles and stepping over lower ones, or going around when they were unable to step over.

"Amazingly, the virtual robots could regain their balance after a stumble—something not seen in previous humanoids. We didn't program this. It emerged naturally as the robots learned to interact with their environment," said Lin.

For this first study, the robot's upper body was kept fixed because adding upper-body movements dramatically increases modeling complexity. Now that this proof-of-concept study has worked for leg movement, the research team is working towards full-body coordinated hiking that uses the robot's full set of degrees of freedom to maximize stability, safety and efficiency in locomotion.

The research team is actively working on adapting these policies to physical humanoids in the real world.

More information: Kwan-Yee Lin et al, Let Humanoids Hike! Integrative Skill Development on Complex Trails, arXiv (2025). DOI: 10.48550/arxiv.2505.06218

Journal information: arXiv

Citation: Simulated humanoid robots learn to hike rugged terrain autonomously (2025, September 15) retrieved 17 September 2025 from https://techxplore.com/news/2025-09-simulated-humanoid-robots-hike-rugged.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.


Summary

Researchers at the University of Michigan have developed an AI framework called LEGO-H that trains humanoid robots to hike complex trails autonomously. The robot uses visual perception to anticipate short-term goals and navigate obstacles, demonstrating skills such as hopping, stepping, and jumping based on trail conditions. This development could enhance embodied AI for applications like search and rescue and ecological monitoring in unexplored areas. In simulations, the robots showed comparable or better performance in efficiency and safety compared to robots with pre-built environmental information. The research team is now working towards applying these policies to physical humanoids in real-world scenarios.
