RLingua: Improving Reinforcement Learning Sample Efficiency in Robotic Manipulations With Large Language Models

Baidu RAL, Georgia Institute of Technology

RLingua combines LLM knowledge and RL for high-performance control


Abstract

Reinforcement learning (RL) has demonstrated its capability in solving various tasks but is notorious for its low sample efficiency. In this paper, we propose RLingua, a framework that leverages the internal knowledge of large language models (LLMs) to reduce the sample complexity of RL in robotic manipulations. To this end, we first present a method for extracting the prior knowledge of LLMs by prompt engineering, so that a preliminary rule-based robot controller for a specific task can be generated in a user-friendly manner. Despite being imperfect, the LLM-generated controller is used to produce action samples during rollouts with a decaying probability, thereby improving RL's sample efficiency. We employ TD3, a widely used RL baseline method, and modify the actor loss to regularize the policy learning towards the LLM-generated controller. RLingua also provides a novel method for improving the imperfect LLM-generated robot controllers via RL. We demonstrate that RLingua can significantly reduce the sample complexity of TD3 in four robot tasks of panda_gym and achieve high success rates in 12 sampled sparsely rewarded robot tasks in RLBench, where standard TD3 fails. Additionally, we validate RLingua's effectiveness in real-world robot experiments through Sim2Real, demonstrating that the learned policies are effectively transferable to real robot tasks.
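The abstract outlines two mechanisms: mixing the LLM-generated controller's actions into rollouts with a decaying probability, and regularizing the TD3 actor loss towards that controller. The sketch below illustrates both under stated assumptions; the linear decay schedule, the MSE regularizer with weight lam, and the names actor, critic, and llm_controller are illustrative placeholders, not the paper's exact implementation.

```python
import random
import torch.nn.functional as F

def select_action(actor, llm_controller, state, step, p0=1.0, decay=1e-4):
    """Rollout-time action selection: sample from the LLM-generated rule-based
    controller with a probability that decays as training progresses."""
    p_llm = p0 * max(0.0, 1.0 - decay * step)  # assumed linear decay schedule
    if random.random() < p_llm:
        return llm_controller(state)  # imperfect but informative prior controller
    return actor(state)               # current RL policy

def actor_loss(actor, critic, llm_controller, states, lam=0.1):
    """TD3 actor objective plus a pull towards the LLM-generated controller."""
    actions = actor(states)
    q_term = -critic(states, actions).mean()                 # maximize the critic's Q
    reg_term = F.mse_loss(actions, llm_controller(states))   # stay near the LLM policy
    return q_term + lam * reg_term
```

Because the controller's influence decays during rollouts and enters the actor loss only as a soft regularizer, the RL agent can eventually outgrow the imperfect LLM-generated controller rather than being bound to it.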

Panda Gym Experiments

Reach

Push

Pick & Place

Slide

RLBench Experiments

Push Button
Meat off Grill
Close Jar
Slide Block to Target
Put Item in Container
Take Lid off Saucepan

Real-World Experiments

Pick & Place

Through the Sim2Real approach, the policies learned by RLingua transfer effectively to real-world robot scenarios, especially pick-and-place tasks.

External Interference

By continuously receiving real-time data from perception models, the learned policies can re-predict the optimal action at every control step, ensuring robustness against external interference; a sketch of this closed loop follows.
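A minimal sketch of this closed-loop behavior, assuming a perception interface that returns fresh state estimates each step (the names perception.get_state, robot.apply, and the control rate are hypothetical, not the project's actual interfaces):

```python
import time

def run_closed_loop(policy, perception, robot, hz=20, horizon=200):
    """Query perception and the policy at every control step, so the rollout
    recovers automatically if the object is disturbed mid-episode."""
    for _ in range(horizon):
        state = perception.get_state()  # assumed real-time pose estimates
        action = policy(state)          # optimal action for the current state
        robot.apply(action)             # e.g., an end-effector displacement
        time.sleep(1.0 / hz)            # hold an approximate control rate
```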

Long Horizon

By updating the state of the target object and applying straightforward control logic to chain sub-goals, RLingua is also adept at performing long-horizon, complex tasks, as sketched below.
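One way to read this is re-invoking the same goal-conditioned policy over a sequence of sub-goals; a hypothetical sketch (reached, perception, and robot are illustrative helpers, not the project's interfaces):

```python
import numpy as np

def reached(state, goal, tol=0.01):
    # hypothetical success check: object within tol meters of the goal position
    return np.linalg.norm(state["object_pos"] - goal) < tol

def run_long_horizon(policy, perception, robot, goals):
    """Chain one learned goal-conditioned policy across a sequence of sub-goals."""
    for goal in goals:                      # e.g., pick and place several items in turn
        state = perception.get_state()
        while not reached(state, goal):
            action = policy(state, goal)    # goal-conditioned learned policy
            robot.apply(action)
            state = perception.get_state()  # re-observe after acting
```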