Solving JumpIN’ Using Zero-Dependency Reinforcement Learning (Student Abstract)

Authors

  • Rachel Ostic, University of Ottawa
  • Oliver Benning, University of Ottawa
  • Patrick Boily, University of Ottawa; Data Action Lab, Ottawa; Idlewyld Analytics and Consulting Services, Wakefield

DOI:

https://doi.org/10.1609/aaai.v35i18.17927

Keywords:

Reinforcement Learning, Single-player Game, Q-learning, JumpIN'

Abstract

Reinforcement learning seeks to teach agents to solve problems using numerical rewards as feedback, making it possible to incentivize actions that maximize returns even when agents start with no strategy or knowledge of their environment. We implement a zero-external-dependency Q-learning algorithm in Python to optimally solve the single-player game JumpIN’ from SmartGames. We focus on interpretability of the model through Q-table parsing, and on transferability to other games through a modular code structure. We observe rapid performance gains using our backtracking update algorithm.
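The abstract's approach can be illustrated with a minimal sketch: tabular Q-learning, written with only the Python standard library, where a full episode is replayed in reverse so the terminal reward propagates through the whole trajectory in one sweep (a backtracking-style update). The toy chain environment, state encoding, and hyperparameters below are assumptions for illustration; the actual JumpIN' board representation is not given in the abstract.

```python
import random

GOAL = 5            # terminal state of the assumed toy chain environment
ALPHA, GAMMA = 0.5, 0.9
ACTIONS = (-1, 1)   # move left or right along the chain

def step(state, action):
    """Advance the toy environment; reward 1 only on reaching the goal."""
    nxt = max(0, min(GOAL, state + action))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def train(episodes=200, seed=0):
    rng = random.Random(seed)
    q = {}  # Q-table: (state, action) -> estimated return
    for _ in range(episodes):
        state, trajectory, done = 0, [], False
        while not done:
            action = rng.choice(ACTIONS)           # random exploration
            nxt, reward, done = step(state, action)
            trajectory.append((state, action, reward, nxt))
            state = nxt
        # Backtracking-style update: sweep the episode in reverse so the
        # terminal reward flows back through the whole trajectory at once.
        for s, a, r, nxt in reversed(trajectory):
            best_next = max(q.get((nxt, b), 0.0) for b in ACTIONS)
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + ALPHA * (r + GAMMA * best_next - old)
    return q
```

After training, the Q-table is directly parsable for interpretability: the greedy action at each state is simply the `argmax` over its entries, which is how a learned policy can be read off without any external tooling.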

Published

2021-05-18

How to Cite

Ostic, R., Benning, O., & Boily, P. (2021). Solving JumpIN’ Using Zero-Dependency Reinforcement Learning (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 35(18), 15861-15862. https://doi.org/10.1609/aaai.v35i18.17927

Section

AAAI Student Abstract and Poster Program