Unveiling the Significance of Toddler-Inspired Reward Transition in Goal-Oriented Reinforcement Learning
DOI:
https://doi.org/10.1609/aaai.v38i1.27815
Keywords:
CMS: Simulating Human Behavior, HAI: Understanding People, Theories, Concepts and Methods, ML: Bio-inspired Learning
Abstract
Toddlers evolve from free exploration with sparse feedback to exploiting prior experiences for goal-directed learning with denser rewards. Drawing inspiration from this Toddler-Inspired Reward Transition, we set out to explore the implications of varying reward transitions when incorporated into Reinforcement Learning (RL) tasks. Central to our inquiry is the transition from sparse to potential-based dense rewards, which preserve the same optimal strategies despite the change in reward. Through various experiments, including egocentric navigation and robotic arm manipulation tasks, we found that proper reward transitions significantly influence sample efficiency and success rates. Of particular note is the efficacy of the toddler-inspired Sparse-to-Dense (S2D) transition. Beyond these performance metrics, using the Cross-Density Visualizer technique, we observed that transitions, especially the S2D one, smooth the policy loss landscape, promoting wide minima that enhance generalization in RL models.
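The S2D transition described above relies on potential-based reward shaping, which adds a term of the form gamma * phi(s') - phi(s) to the reward without altering the optimal policy (Ng et al., 1999). The following is a minimal sketch of such a transition in a goal-reaching task; the names (s2d_reward, potential, TRANSITION_STEP) and the distance-based potential are illustrative assumptions, not the authors' implementation.

import numpy as np

GAMMA = 0.99
TRANSITION_STEP = 50_000  # hypothetical training step at which dense shaping is switched on


def sparse_reward(next_state, goal, tol=0.05):
    # Sparse feedback: reward only when the goal is actually reached.
    return 1.0 if np.linalg.norm(next_state - goal) < tol else 0.0


def potential(state, goal):
    # Hypothetical potential function: negative distance to the goal.
    return -np.linalg.norm(state - goal)


def s2d_reward(state, next_state, goal, step):
    # Toddler-inspired S2D transition: begin with the sparse reward alone,
    # then add the potential-based shaping term gamma * phi(s') - phi(s),
    # which leaves the optimal policy unchanged (Ng et al., 1999).
    r = sparse_reward(next_state, goal)
    if step >= TRANSITION_STEP:
        r += GAMMA * potential(next_state, goal) - potential(state, goal)
    return r

Under these assumptions, the agent first explores under sparse feedback and, once the transition step is reached, receives denser guidance toward the goal while the set of optimal policies is preserved.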
Published
2024-03-25
How to Cite
Park, J., Kim, Y., Yoo, H. bin, Lee, M. W., Kim, K., Choi, W.-S., Lee, M., & Zhang, B.-T. (2024). Unveiling the Significance of Toddler-Inspired Reward Transition in Goal-Oriented Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(1), 592-600. https://doi.org/10.1609/aaai.v38i1.27815
Issue
Vol. 38 No. 1 (2024)
Section
AAAI Technical Track on Cognitive Modeling & Cognitive Systems