Approximately Optimal Risk-Averse Routing Policies via Adaptive Discretization

Authors

  • Darrell Hoy, Northwestern University
  • Evdokia Nikolova, University of Texas at Austin

DOI:

https://doi.org/10.1609/aaai.v29i1.9703

Keywords:

risk-aversion, planning under uncertainty, routing, Markov decision process, approximation

Abstract

Mitigating risk in decision-making has been a long-standing problem. Because of the mathematical challenge posed by the nonlinear nature of risk, especially in adaptive decision-making problems, finding optimal policies is typically intractable. With a focus on efficient algorithms, we ask how well we can approximate the optimal policies for the difficult case of general utility models of risk. Little is known about efficient algorithms beyond the very special cases of linear (risk-neutral) and exponential utilities, since general utilities are not separable and preclude the use of traditional dynamic programming techniques. In this paper, we consider general utility functions and investigate efficient computation of approximately optimal routing policies, where the goal is to maximize the expected utility of arriving at a destination around a given deadline. We present an adaptive-discretization variant of successive approximation that gives an $\epsilon$-optimal policy in polynomial time. The main insight is to perform the discretization in the space of utility levels, which results in a nonuniform discretization of the domain and applies to any monotone utility function.
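
The central idea of the abstract, discretizing in utility-level space rather than uniformly in the time domain, can be illustrated with a short sketch. The snippet below is not the authors' algorithm; it is a minimal illustration, assuming a monotone nonincreasing utility function over arrival times and using hypothetical names such as utility_level_grid and eps, of how placing grid points at epsilon-spaced utility values induces a nonuniform discretization of the time domain.

    import math

    def utility_level_grid(utility, t_min, t_max, eps, tol=1e-9):
        """Return arrival-time grid points whose utility values differ by at most eps.

        `utility` is assumed monotone nonincreasing in arrival time (arriving
        later past the deadline is worth less). The resulting grid is dense
        where utility changes quickly and sparse where it is flat.
        """
        grid = [t_min]
        t = t_min
        while t < t_max:
            # Bisect for the largest t' <= t_max with utility(t) - utility(t') <= eps.
            lo, hi = t, t_max
            if utility(t) - utility(t_max) <= eps:
                lo = t_max
            else:
                for _ in range(200):
                    mid = 0.5 * (lo + hi)
                    if utility(t) - utility(mid) <= eps:
                        lo = mid
                    else:
                        hi = mid
                    if hi - lo < tol:
                        break
            t_next = min(t_max, max(lo, t + tol))  # guarantee forward progress
            grid.append(t_next)
            t = t_next
        return grid

    # Example: a soft-deadline utility decaying around a deadline D = 10.
    D = 10.0

    def u(t):
        return 1.0 / (1.0 + math.exp(t - D))

    points = utility_level_grid(u, t_min=0.0, t_max=20.0, eps=0.05)
    # `points` clusters near the deadline, where u drops fastest, and spreads
    # out in the flat regions: a nonuniform discretization of the time domain.

Because the grid is defined by equal steps in utility value, the same construction works for any monotone utility function, which is the property the abstract highlights.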

Published

2015-03-04

How to Cite

Hoy, D., & Nikolova, E. (2015). Approximately Optimal Risk-Averse Routing Policies via Adaptive Discretization. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1). https://doi.org/10.1609/aaai.v29i1.9703

Section

AAAI Technical Track: Reasoning under Uncertainty