Scheduled Policy Optimization for Natural Language Communication with Intelligent Agents
We investigate the task of learning to follow natural language instructions
by jointly reasoning with visual observations and language inputs. In contrast
to existing methods which start with learning from demonstrations (LfD) and
then use reinforcement learning (RL) to fine-tune the model parameters, we
propose a novel policy optimization algorithm which dynamically schedules
demonstration learning and RL. The proposed training paradigm provides
more efficient exploration and better generalization than existing methods.
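As a rough, hypothetical illustration of such a dynamic schedule (not the exact algorithm proposed here), the Python sketch below alternates supervised updates on demonstrations with REINFORCE-style RL updates for a toy tabular softmax policy. The schedule probability p_demo, its decay, and the toy task are all assumptions made for illustration only.

```python
import numpy as np

# Illustrative sketch: dynamically scheduled training that interleaves
# learning from demonstrations (LfD) and reinforcement learning (RL).
# The toy environment, tabular softmax policy, schedule probability
# `p_demo`, and its decay are assumptions, not the paper's components.

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 4, 3
theta = np.zeros((N_STATES, N_ACTIONS))  # tabular policy logits

def pi(s):
    """Softmax action distribution for state s."""
    z = np.exp(theta[s] - theta[s].max())
    return z / z.sum()

def reward(s, a):
    # Toy task: action (s % N_ACTIONS) is correct in state s.
    return 1.0 if a == s % N_ACTIONS else 0.0

# Demonstrations: one optimal (state, action) pair per state.
demos = [(s, s % N_ACTIONS) for s in range(N_STATES)]

lr, p_demo, decay = 0.5, 0.9, 0.995
for step in range(2000):
    s = int(rng.integers(N_STATES))
    probs = pi(s)
    if rng.random() < p_demo:
        # LfD update: cross-entropy gradient toward demonstrated action.
        _, a_star = demos[s]
        grad = -probs
        grad[a_star] += 1.0
        theta[s] += lr * grad
    else:
        # RL update: one-step REINFORCE on a sampled action.
        a = rng.choice(N_ACTIONS, p=probs)
        grad = -probs
        grad[a] += 1.0
        theta[s] += lr * reward(s, a) * grad
    p_demo *= decay  # schedule shifts from demonstrations toward RL

print("greedy actions:", theta.argmax(axis=1))
```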
Compared to existing ensemble models, the best single model trained with our
proposed method reduces the execution error by over 50% in a block-world
environment. To further illustrate the exploration strategy of our RL
algorithm, we also include systematic studies of the evolution of policy
entropy during training.
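For reference, policy entropy here denotes H(pi(.|s)) = -sum_a pi(a|s) log pi(a|s), a standard measure of how exploratory the policy is. A minimal sketch of this computation follows; averaging the per-state entropies is an assumption for illustration.

```python
import numpy as np

# Per-state policy entropy, H(pi(.|s)) = -sum_a pi(a|s) * log pi(a|s),
# a standard diagnostic for tracking exploration during training.
# Averaging over states below is an illustrative choice.

def policy_entropy(probs, eps=1e-12):
    """Entropy of each row of a (states x actions) probability matrix."""
    p = np.clip(probs, eps, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

probs = np.array([[0.2, 0.5, 0.3],
                  [0.9, 0.05, 0.05]])
print(policy_entropy(probs).mean())  # mean entropy across states
```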