The Unreasonable Effectiveness of Inverse Reinforcement Learning in Advancing Cancer Research

Authors

  • John Kalantari, Mayo Clinic
  • Heidi Nelson, Mayo Clinic
  • Nicholas Chia, Mayo Clinic

DOI:

https://doi.org/10.1609/aaai.v34i01.5380

Abstract

The “No Free Lunch” theorem states that for any algorithm, elevated performance over one class of problems is offset by degraded performance over another. Stated differently, no single algorithm works well for everything. Instead, designing effective algorithms often means exploiting prior knowledge of the data relationships specific to a given problem. This “unreasonable efficacy” is especially desirable for complex and seemingly intractable problems in the natural sciences. One area in particular need of better algorithms is cancer biology, a field in which relatively few insights are being generated from relatively large amounts of data. In part, this is because statistics alone cannot capture cancer as a genetic evolutionary process: one in which cells actively mutate in order to navigate host barriers, outcompete neighboring cells, and expand spatially.
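
For reference, a common formal statement of the theorem (after Wolpert and Macready, 1997; the notation below is theirs, not this paper's) is that, summed over all possible objective functions f, any two search algorithms a_1 and a_2 induce identical distributions over observed cost sequences:

\sum_{f} P(d_m^y \mid f, m, a_1) = \sum_{f} P(d_m^y \mid f, m, a_2)

where d_m^y denotes the sequence of cost values observed after m function evaluations. Averaged over all problems, no algorithm can dominate another; gains must come from matching the algorithm to the problem class at hand.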

Our work is built on the central proposition that the Markov Decision Process (MDP) can better represent the process by which cancer arises and progresses. More specifically, by encoding a cancer cell's complex behavior as an MDP, we seek to model the series of genetic changes, or evolutionary trajectory, that leads to cancer as an optimal decision process. We posit that an Inverse Reinforcement Learning (IRL) approach will enable us to reverse-engineer an optimal policy and reward function from a set of “expert demonstrations” extracted from the DNA of patient tumors. The inferred reward function and optimal policy can subsequently be used to extrapolate the evolutionary trajectory of any tumor. Here, we introduce a Bayesian nonparametric IRL model (PUR-IRL) in which the number of reward functions is a priori unbounded, in order to account for uncertainty in cancer data, i.e., the existence of latent trajectories and non-uniform sampling. We show that PUR-IRL is “unreasonably effective” at gaining interpretable and intuitive insights about cancer progression from high-dimensional genome data.
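
To make the MDP framing concrete, below is a minimal, hypothetical sketch in Python: tumor progression as a four-state chain of driver mutations, “expert demonstrations” as observed mutation orderings, and a maximum-entropy-style IRL loop that recovers a single reward function over states. The states, dynamics, demonstrations, and hyperparameters here are all illustrative assumptions, not the paper's implementation; PUR-IRL itself is Bayesian nonparametric and infers an a priori unbounded number of reward functions, rather than the single reward fit in this toy.

```python
# Illustrative maximum-entropy-style IRL on a toy tumor-progression MDP.
# Everything here (states, dynamics, demonstrations) is a hypothetical
# stand-in for the paper's setup, not the PUR-IRL implementation.
import numpy as np

# States: 0 = normal epithelium, 1-2 = intermediate driver mutations,
# 3 = established tumor. Actions: 0 = stay, 1 = acquire next mutation.
N_STATES, N_ACTIONS, GAMMA, HORIZON = 4, 2, 0.95, 10

def step(s, a):
    """Deterministic toy dynamics: action 1 advances one mutation stage."""
    return min(s + a, N_STATES - 1)

def soft_value_iteration(reward, iters=200):
    """Soft (log-sum-exp) Bellman backups; returns a Boltzmann policy."""
    V = np.zeros(N_STATES)
    for _ in range(iters):
        Q = np.array([[reward[s] + GAMMA * V[step(s, a)]
                       for a in range(N_ACTIONS)] for s in range(N_STATES)])
        m = Q.max(axis=1)
        V = m + np.log(np.exp(Q - m[:, None]).sum(axis=1))
    return np.exp(Q - V[:, None])  # policy[s, a] = P(a | s)

def expected_visitation(policy):
    """Average state occupancy over a fixed horizon, starting from state 0."""
    p = np.zeros(N_STATES); p[0] = 1.0
    d = np.zeros(N_STATES)
    for _ in range(HORIZON):
        d += p
        nxt = np.zeros(N_STATES)
        for s in range(N_STATES):
            for a in range(N_ACTIONS):
                nxt[step(s, a)] += p[s] * policy[s, a]
        p = nxt
    return d / HORIZON

# "Expert demonstrations": mutation orderings read off (hypothetically)
# from patient tumor genomes, padded to the horizon.
demos = [[0, 1, 2, 3, 3, 3, 3, 3, 3, 3],
         [0, 0, 1, 2, 3, 3, 3, 3, 3, 3]]
empirical = np.bincount(np.ravel(demos), minlength=N_STATES) / np.size(demos)

# Max-ent IRL gradient ascent: with state-indicator features, the gradient
# of the demonstration log-likelihood is (empirical - expected) visitation.
reward = np.zeros(N_STATES)
for _ in range(300):
    policy = soft_value_iteration(reward)
    reward += 0.1 * (empirical - expected_visitation(policy))

print("inferred reward per state:", np.round(reward, 2))
print("policy P(advance | state):", np.round(policy[:, 1], 2))
```

On this toy chain, the reward inferred for the terminal tumor state grows until the policy's state visitation matches the demonstrations, so the Boltzmann policy learns to advance, mirroring the framing of progression as an optimal decision process. PUR-IRL would additionally have to decide how many distinct reward functions (latent trajectories) the cohort supports, which is where the nonparametric prior enters.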

Published

2020-04-03

How to Cite

Kalantari, J., Nelson, H., & Chia, N. (2020). The Unreasonable Effectiveness of Inverse Reinforcement Learning in Advancing Cancer Research. Proceedings of the AAAI Conference on Artificial Intelligence, 34(01), 437–445. https://doi.org/10.1609/aaai.v34i01.5380

Issue

Vol. 34 No. 01 (2020)

Section

AAAI Special Technical Track: AI for Social Impact