
Multi-Task Reinforcement Learning with Soft Modularization

Published in (conference/journal): NeurIPS

Ruihan Yang¹, Huazhe Xu², Yi Wu³,⁴, Xiaolong Wang¹

¹UC San Diego  ²UC Berkeley  ³IIIS, Tsinghua  ⁴Shanghai Qi Zhi Institute

Abstract:

Multi-task learning is a very challenging problem in reinforcement learning. While training multiple tasks jointly allows the policies to share parameters across different tasks, the optimization problem becomes non-trivial: it remains unclear which parameters in the network should be reused across tasks, and how the gradients from different tasks may interfere with each other. Thus, instead of naively sharing parameters across tasks, we introduce an explicit modularization technique on policy representation to alleviate this optimization issue. Given a base policy network, we design a routing network which estimates different routing strategies to reconfigure the base network for each task. Instead of directly selecting routes for each task, our task-specific policy uses a method called soft modularization to softly combine all the possible routes, which makes it suitable for sequential tasks. We experiment with various robotics manipulation tasks in simulation and show our method improves both sample efficiency and performance over strong baselines by a large margin. Our project page with code is at https://rchalyang.github.io/SoftModule/.
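To illustrate the routing idea described in the abstract, below is a minimal PyTorch sketch of a soft-modularized policy: a base network of layers, each holding several parallel modules, is recombined per task by a routing network that outputs soft weights over module-to-module connections. The class name, layer/module counts, the routing-network inputs, and the way module outputs are aggregated are simplifying assumptions for illustration, not the paper's exact architecture; see the project page for the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftModularPolicy(nn.Module):
    """Minimal sketch of soft modularization (hypothetical, simplified).

    A base network of `num_layers` layers, each with `num_modules` parallel
    modules, is softly recombined by routing weights predicted from the
    observation and a task embedding.
    """

    def __init__(self, obs_dim, task_dim, act_dim,
                 hidden=128, num_layers=2, num_modules=4):
        super().__init__()
        self.num_layers = num_layers
        self.num_modules = num_modules
        # Shared base modules: layer 0 maps obs -> hidden, later layers hidden -> hidden.
        self.base_modules = nn.ModuleList([
            nn.ModuleList([
                nn.Linear(obs_dim if l == 0 else hidden, hidden)
                for _ in range(num_modules)
            ])
            for l in range(num_layers)
        ])
        # Routing network: conditioned on observation and task embedding,
        # one head per pair of consecutive layers outputs a weight matrix.
        self.route_base = nn.Sequential(
            nn.Linear(obs_dim + task_dim, hidden), nn.ReLU())
        self.route_heads = nn.ModuleList([
            nn.Linear(hidden, num_modules * num_modules)
            for _ in range(num_layers - 1)
        ])
        self.head = nn.Linear(hidden, act_dim)

    def forward(self, obs, task_emb):
        route_feat = self.route_base(torch.cat([obs, task_emb], dim=-1))
        # First layer: every module sees the raw observation.
        outs = [F.relu(m(obs)) for m in self.base_modules[0]]
        for l in range(1, self.num_layers):
            # Soft routing: softmax over incoming modules for each target module,
            # so all routes are combined instead of hard-selecting one.
            w = self.route_heads[l - 1](route_feat)
            w = w.view(-1, self.num_modules, self.num_modules)
            w = F.softmax(w, dim=-1)                     # (batch, target, source)
            stacked = torch.stack(outs, dim=1)           # (batch, source, hidden)
            mixed = torch.einsum('bts,bsh->bth', w, stacked)
            outs = [F.relu(self.base_modules[l][j](mixed[:, j]))
                    for j in range(self.num_modules)]
        # Average module outputs before the action head (one simple choice).
        return self.head(torch.stack(outs, dim=1).mean(dim=1))
```

Because the routing weights are a smooth function of the task embedding and observation, every module receives gradients from every task, which is the property that allows the base modules to be shared while each task effectively gets its own reconfigured network.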
