Jingxiang Mo
Henri Lemoine
We present the initial, work-in-progress development of an open-source, low-cost humanoid robot platform designed for learning-based control algorithms. We designed the robot from scratch in CAD, imported it into Isaac Gym, trained a PPO locomotion policy, 3D printed and assembled the hardware, and are now actively working on sim-to-real transfer. Our preliminary work covers the CAD design of the robot, the electronics diagrams, a minimal PPO training environment in Isaac Gym based on the Humanoid Gym library, and initial attempts at sim-to-sim evaluation. Our goal is to build and deploy a fleet of 20-30 small humanoid robots and to create an affordable open-source platform for humanoid research and competitions. The robot design is inspired by the Robotis OP3, and the initiative is inspired by Alex Koch's low-cost robot arms. This progress report presents our progress toward an accessible platform for humanoid robotics research, development, and deployment, with future work aimed at refining the hardware design and improving the learning pipeline for real-world deployment.
There is a strong need for small, open-source humanoid robots that enable fast deployment of learning-based policies, are cheap to build, use easily replaceable parts, and are still capable of dynamic motions.
In this initial report, we present our progress in designing and simulating this humanoid platform. We also outline our strategies for addressing the sim-to-real gap, including careful system identification and domain randomization techniques. By sharing our design process, simulation framework, and initial results, we aim to make humanoid robotics research more accessible and to accelerate progress in the field.
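As a concrete illustration of the domain randomization strategy mentioned above, the sketch below shows how physical parameters could be re-sampled at each environment reset in an Isaac Gym-style training loop. The parameter ranges, the setter methods on `env`, and the helper names are illustrative assumptions for this sketch, not our final implementation.

```python
import numpy as np

# Illustrative randomization ranges (assumed values, not our final configuration).
RANDOMIZATION_RANGES = {
    "friction":        (0.5, 1.25),   # ground friction coefficient
    "mass_scale":      (0.9, 1.1),    # multiplicative scale on link masses
    "motor_strength":  (0.8, 1.2),    # multiplicative scale on torque limits
    "joint_obs_noise": (0.0, 0.02),   # std of Gaussian noise on joint positions (rad)
}

def sample_randomization(rng: np.random.Generator) -> dict:
    """Sample one set of physical parameters for an environment reset."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in RANDOMIZATION_RANGES.items()}

def apply_randomization(env, params: dict) -> None:
    """Push sampled parameters into the simulator (hypothetical setter methods)."""
    env.set_ground_friction(params["friction"])
    env.scale_link_masses(params["mass_scale"])
    env.scale_motor_torque_limits(params["motor_strength"])
    env.set_joint_obs_noise_std(params["joint_obs_noise"])

# Example usage inside a training loop: re-randomize at every episode reset.
# rng = np.random.default_rng(0)
# apply_randomization(env, sample_randomization(rng))
```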
Humanoid robotics research spans a wide range of designs that vary in size, actuator type, and learning method. Full-scale humanoids such as TORO, LOLA, and WALK-MAN target human-scale tasks, while mid-scale robots such as the MIT Humanoid and Unitree G1 emphasize agility and flexibility. Recent work such as the Berkeley Humanoid has demonstrated progress in mid-sized humanoid robots using learning-based approaches.
Disney Research has also developed bipedal robots, notably in "Design and Control of a Bipedal Robotic Character" and "VMP", which focus on dynamic character control. DeepMind's soccer-playing robots demonstrate advanced policy learning, and the "Humanoid Gym" environment uses reinforcement learning to transfer policies from simulation to real-world tasks with minimal additional tuning.
Our project builds on earlier work on open-source miniature humanoids, with a focus on low-cost and customizable designs. Inspired by the Robotis OP3 and Alex Koch's low-cost robot arms, we aim to make humanoid robots more accessible so that researchers can quickly prototype and deploy small humanoid robots using learning-based control.
The core design principles for this humanoid robot are sim-to-real friendliness, reliability at low cost, and customizability.
Zero-Shot Sim-to-Real Friendly