Rethinking Mutual Information for Language Conditioned Skill Discovery on Imitation Learning
DOI:
https://doi.org/10.1609/icaps.v34i1.31488

Abstract
Language-conditioned robot behavior plays a vital role in executing complex tasks by associating human commands or instructions with perception and actions. The ability to compose long-horizon tasks based on unconstrained language instructions necessitates the acquisition of a diverse set of general-purpose skills. However, acquiring inherent primitive skills in a coupled and long-horizon environment without external rewards or human supervision presents significant challenges. In this paper, we evaluate the relationship between skills and language instructions from a mathematical perspective, employing two forms of mutual information within the framework of language-conditioned policy learning. To maximize the mutual information between language and skills in an unsupervised manner, we propose an end-to-end imitation learning approach known as Language Conditioned Skill Discovery (LCSD). Specifically, we utilize vector quantization to learn discrete latent skills and leverage skill sequences of trajectories to reconstruct high-level semantic instructions. Through extensive experiments on language-conditioned robotic navigation and manipulation tasks, encompassing BabyAI, LORel, and Calvin, we demonstrate the superiority of our method over prior works. Our approach exhibits enhanced generalization capabilities towards unseen tasks, improved skill interpretability, and notably higher rates of task completion success.
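The vector-quantization step mentioned in the abstract can be sketched as a nearest-neighbor lookup into a learned codebook of skill embeddings. The snippet below is a minimal illustration of that quantization operation only; the codebook size, latent dimension, and function names are illustrative assumptions, and LCSD's actual encoder, losses, and training procedure are described in the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical codebook of K discrete skill embeddings, each of dimension D.
# In a trained model these entries would be learned jointly with the policy.
K, D = 8, 4
codebook = rng.normal(size=(K, D))

def quantize(z):
    """Map each continuous latent vector in z (shape (N, D)) to its
    nearest codebook entry under Euclidean distance, returning the
    quantized vectors and their discrete skill indices."""
    # Pairwise squared distances between latents and codebook entries.
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d2.argmin(axis=1)          # one discrete skill id per latent
    return codebook[idx], idx

# Latents as they might come from a trajectory encoder (placeholder data).
z = rng.normal(size=(5, D))
z_q, skills = quantize(z)            # z_q: (5, D) quantized latents
```

The resulting discrete skill indices are what make the skill sequence of a trajectory amenable to reconstructing a high-level language instruction, since each segment of the trajectory is summarized by a symbol from a finite vocabulary.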
Published
2024-05-30
How to Cite
Ju, Z., Yang, C., Sun, F., Wang, H., & Qiao, Y. (2024). Rethinking Mutual Information for Language Conditioned Skill Discovery on Imitation Learning. Proceedings of the International Conference on Automated Planning and Scheduling, 34(1), 301-309. https://doi.org/10.1609/icaps.v34i1.31488