
Can robots move like athletes? A new training model helps them replicate sports moves, but the results show both progress and unexpected challenges.
![Retargeting Human Video Motions to Robot Motions: (a) Human motions are captured from video. (b) Using TRAM [93], 3D human motion is reconstructed in the SMPL parameter format. (c) A reinforcement learning (RL) policy is trained in simulation to track the SMPL motion. (d) The learned SMPL motion is retargeted to the Unitree G1 humanoid robot in simulation. (e) The trained RL policy is deployed on the real robot, executing the final motion in the physical world. This pipeline ensures the retargeted motions remain physically feasible and suitable for real-world deployment. Credit: arXiv (2025). DOI: 10.48550/arxiv.2502.01143](https://www.electronicsforu.com/wp-contents/uploads/2025/02/new-model-for-training-500x126.jpg)
A team of AI and robotics researchers from Carnegie Mellon University, along with two colleagues from NVIDIA, has created a new model to train robots to move like human athletes. The team observed that most robotic training focuses on locomotion, leading to robots that move efficiently but without fluidity or athleticism. To address this, they explored whole-body training. They found existing models lacked adaptability and relied on too many parameters, making robot movements overly cautious. This led them to develop a new two-stage training framework.
The first stage trains an AI module to analyze whole-body human motion videos, adjusting key movements to fit the robot’s capabilities using motion tracking. The second stage gathers real-world data to bridge the gap between human movement in videos and how robots can physically move. The resulting framework is called Aligning Simulation and Real-World Physics (ASAP).
The ASAP framework consists of four steps. First, in motion tracking pre-training and real trajectory collection, humanoid motions are retargeted from human videos, and multiple motion tracking policies are pre-trained and rolled out to collect real-world movement trajectories. Next, delta action model training uses this real-world rollout data: a model learns corrective actions that minimize the discrepancy between the simulated state and the actual real-world state, compensating for the mismatch between the simulator’s physics and the robot’s true dynamics.
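The delta-action step can be illustrated with a minimal sketch: a small network is given the current state and the policy’s action, and is trained so that the simulator, fed the corrected action, reproduces the next state observed on the real robot. This is only an illustration of the idea under stated assumptions; the network architecture, the dimensions, and the differentiable `sim_step` function are all assumptions here, not details taken from the paper.

```python
import torch
import torch.nn as nn

# Assumed sizes for illustration only (not from the paper).
STATE_DIM, ACTION_DIM = 64, 23

# Hypothetical delta action model: maps (state, action) -> corrective action.
delta_model = nn.Sequential(
    nn.Linear(STATE_DIM + ACTION_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, ACTION_DIM),
)
optimizer = torch.optim.Adam(delta_model.parameters(), lr=1e-3)

def train_step(sim_step, state, action, real_next_state):
    """One gradient step: make sim(state, action + delta) match the real transition.

    `sim_step` is assumed to be a differentiable simulator step so the
    loss can backpropagate through it into the delta model.
    """
    delta = delta_model(torch.cat([state, action], dim=-1))
    predicted_next = sim_step(state, action + delta)
    loss = nn.functional.mse_loss(predicted_next, real_next_state)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training on batches of real rollout transitions drives the loss down, leaving a model that nudges simulated dynamics toward what the physical robot actually does.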
In the policy fine-tuning stage, the delta action model is frozen and integrated into the simulator to better align with real-world physics. The pre-trained motion tracking policy is then fine-tuned for greater precision. Finally, in real-world deployment, the fine-tuned policy is implemented directly in the real world without relying on the delta action model, ensuring the robot can perform its trained movements independently.
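The last two stages can be sketched as a thin wrapper around the simulator: during fine-tuning the frozen delta model corrects every action before the simulator sees it, and at deployment the wrapper is dropped and the policy acts on the robot directly. All class and function names below are hypothetical, chosen just to mirror the description above.

```python
class DeltaCorrectedSim:
    """Simulator wrapped with a frozen delta action model (fine-tuning stage)."""

    def __init__(self, sim_step, delta_model):
        self.sim_step = sim_step
        self.delta_model = delta_model  # frozen: never updated in this stage

    def step(self, state, action):
        # The correction aligns simulated dynamics with real-world physics,
        # so the policy fine-tunes against more realistic transitions.
        delta = self.delta_model(state, action)
        return self.sim_step(state, action + delta)

def deploy(policy, robot_state):
    """Deployment stage: the policy runs directly on the real robot.

    No delta model is needed here, because the real world already
    supplies real physics.
    """
    return policy(robot_state)
```

The design point is that the delta model is a training-time crutch only: it reshapes the simulator so fine-tuning transfers, then disappears from the deployed system.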
To test the framework, the researchers trained a robot to replicate iconic sports moves. It performed Kobe Bryant’s fadeaway jump shot, LeBron James’ Silencer move, and Cristiano Ronaldo’s Siu leap with a mid-air spin. Each movement was recorded.
The robot’s movements clearly resemble the famous sports moves, highlighting progress in full-body motion. However, it’s also evident that much more work is needed before a robot could be mistaken for a professional athlete.
Reference: Tairan He et al, ASAP: Aligning Simulation and Real-World Physics for Learning Agile Humanoid Whole-Body Skills, arXiv (2025). DOI: 10.48550/arxiv.2502.01143