
Building Open-Ended Embodied Agent via Language-Policy Bidirectional Adaptation

Venue: arXiv


Shaopeng Zhai*1, Jie Wang*1, Tianyi Zhang*1, Fuxian Huang*1, Qi Zhang*1, Ming Zhou*1, Jing Hou2 1, Yu Qiao1, Yu Liu1


Abstract

Building embodied agents by integrating Large Language Models (LLMs) and Reinforcement Learning (RL) has revolutionized human-AI interaction: researchers can now leverage language instructions to plan decision-making for open-ended tasks. However, existing research faces challenges in meeting the requirement of open-endedness: it typically trains either the LLM or the RL policy to adapt to a fixed counterpart, limiting exploration of novel skills and hindering the efficacy of human-AI interaction. To this end, we present OpenPAL, a co-training framework comprising two stages: (1) fine-tuning a pre-trained LLM to translate human instructions into goals for planning, and goal-conditioned training of a policy for decision-making; (2) co-training to align the LLM and policy, achieving instruction open-endedness. We conducted experiments using Contra, an open-ended FPS game, demonstrating that an agent trained with OpenPAL not only comprehends arbitrary instructions but also executes them efficiently. These results suggest that OpenPAL holds the potential to construct open-ended embodied agents in practical scenarios.
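The abstract's first stage describes a pipeline in which an LLM translates a human instruction into a goal, and a goal-conditioned policy then selects actions given that goal. The following is a minimal Python sketch of that control flow only; all class names, the goal vocabulary, and the decision rules are illustrative assumptions, not the paper's actual interface or models.

```python
class InstructionTranslator:
    """Sketch of stage-1a: an LLM fine-tuned to map a human
    instruction to a discrete goal. Here a keyword lookup stands
    in for LLM inference (hypothetical goal vocabulary)."""

    GOALS = {"collect supplies": 0, "attack enemy": 1, "follow teammate": 2}

    def translate(self, instruction: str) -> int:
        for phrase, goal in self.GOALS.items():
            if phrase in instruction.lower():
                return goal
        return -1  # unrecognized instruction -> no-op goal


class GoalConditionedPolicy:
    """Sketch of stage-1b: a policy conditioned on (observation, goal).
    A real policy would be an RL model; this placeholder maps each
    goal directly to an action label."""

    def act(self, observation: list, goal: int) -> str:
        actions = {0: "move_to_supplies", 1: "engage", 2: "follow", -1: "idle"}
        return actions[goal]


def run_step(instruction: str, observation: list) -> str:
    # One instruction-to-action step: translate, then act on the goal.
    goal = InstructionTranslator().translate(instruction)
    return GoalConditionedPolicy().act(observation, goal)
```

The second stage, co-training, would then update both components jointly so the translator's goal space and the policy's executable skills stay aligned; that loop is not sketched here.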
