Maria Stamatopoulou*, Jeffrey Li*, Dimitrios Kanoulas
Robot Perception Lab, Department of Computer Science,
University College London (UCL), United Kingdom
We present SDS ("See it. Do it. Sorted."), a novel pipeline for intuitive quadrupedal skill learning from a single demonstration video. Leveraging the visual capabilities of GPT-4o, SDS processes input videos through our novel chain-of-thought prompting technique (SUS) and generates executable reward functions (RFs) that drive the imitation of locomotion skills by training a Proximal Policy Optimization (PPO)-based Reinforcement Learning (RL) policy with environment information from the NVIDIA IsaacGym simulator. SDS autonomously evaluates the RFs by monitoring the individual reward components and feeding training footage and fitness metrics back into GPT-4o, which is then prompted to evolve the RFs toward higher task fitness at each iteration. We validate our method on the Unitree Go1 robot, demonstrating its ability to execute diverse skills such as trotting, bounding, pacing, and hopping with high imitation fidelity and locomotion stability. SDS improves on SOTA methods in task adaptability, reduces dependence on domain-specific knowledge, and bypasses the need for labor-intensive reward engineering and large-scale training datasets.
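To make the iterative reward-evolution loop concrete, the following is a minimal Python sketch of the cycle described above: GPT-4o proposes a reward function, a PPO policy is trained with it, and the resulting fitness, per-component reward statistics, and rollout footage are fed back to GPT-4o to produce the next candidate. The helper names (query_gpt4o, train_ppo_policy) and the TrainingResult fields are illustrative placeholders, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class TrainingResult:
    fitness: float                # task-fitness metric of the trained policy
    reward_component_stats: dict  # per-component reward statistics monitored during training
    rollout_video_path: str       # footage of the learned behaviour


def query_gpt4o(prompt: str, video_path: Optional[str] = None) -> str:
    """Placeholder: query GPT-4o (optionally with video frames) and return
    executable reward-function source code."""
    raise NotImplementedError


def train_ppo_policy(reward_fn_source: str) -> TrainingResult:
    """Placeholder: train a PPO policy in IsaacGym using the generated reward function."""
    raise NotImplementedError


def evolve_reward_function(demo_video: str, iterations: int = 5) -> str:
    """Iteratively refine the GPT-4o-generated reward function from a single demonstration video."""
    reward_fn = query_gpt4o(
        "Generate a reward function that imitates the demonstrated gait.", demo_video
    )
    for _ in range(iterations):
        result = train_ppo_policy(reward_fn)
        feedback = (
            f"Task fitness: {result.fitness:.3f}\n"
            f"Per-component reward statistics: {result.reward_component_stats}\n"
            "Evolve the reward function to achieve higher task fitness."
        )
        reward_fn = query_gpt4o(feedback, result.rollout_video_path)
    return reward_fn
```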
We deploy all policies on the onboard computer of a Unitree Go1 robot using a Docker-based architecture, and apply base tracking to verify the stability of the locomotion policies.
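As an illustration of the base-tracking check, the sketch below computes the mean error between commanded and measured base linear velocity over a rollout; this formulation and the data sources are assumptions for clarity rather than the exact metric used in our evaluation.

```python
import numpy as np


def base_tracking_error(commanded_vel: np.ndarray, measured_vel: np.ndarray) -> float:
    """Mean Euclidean error (m/s) between commanded and measured base linear velocity."""
    return float(np.mean(np.linalg.norm(commanded_vel - measured_vel, axis=-1)))


# Hypothetical example: constant 0.5 m/s forward command vs. logged base-velocity estimates
cmd = np.tile([0.5, 0.0, 0.0], (1000, 1))
meas = cmd + 0.05 * np.random.randn(1000, 3)
print(f"mean base tracking error: {base_tracking_error(cmd, meas):.3f} m/s")
```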
We demonstrate SDS's ability to generalize across platforms by successfully deploying policies on an ANYmal-D quadruped with an inverted joint configuration.