Robust Bandit Learning with Imperfect Context
Keywords: Online Learning & Bandits

Abstract
A standard assumption in contextual multi-armed bandits is that the true context is perfectly known before arm selection. However, in many practical applications (e.g., cloud resource management), the context information available prior to arm selection can only be obtained by prediction and is thus subject to errors or adversarial modification. In this paper, we study a novel contextual bandit setting in which only an imperfect context is available for arm selection, while the true context is revealed at the end of each round. We propose two robust arm selection algorithms: MaxMinUCB (Maximize Minimum UCB), which maximizes the worst-case reward, and MinWD (Minimize Worst-case Degradation), which minimizes the worst-case regret. Importantly, we analyze the robustness of MaxMinUCB and MinWD by deriving both regret and reward bounds relative to an oracle that knows the true context. Our results show that, as time goes on, MaxMinUCB and MinWD both perform asymptotically as well as their optimal counterparts that know the reward function. Finally, we apply MaxMinUCB and MinWD to online edge datacenter selection and run synthetic simulations to validate our theoretical analysis.
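To make the max-min idea concrete, the following is a minimal, hypothetical sketch of the selection rule the abstract attributes to MaxMinUCB: given UCB estimates for each arm under each plausible true context (consistent with the imperfect context), pick the arm whose worst-case UCB is largest. The function name, data layout, and numbers below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def maxmin_ucb_select(ucb_scores: np.ndarray) -> int:
    """Illustrative MaxMinUCB-style rule (assumed interface).

    ucb_scores[a, c] = UCB estimate of arm a under candidate context c,
    where the candidates form an uncertainty set around the imperfect
    context. Returns the arm maximizing the minimum UCB over candidates.
    """
    worst_case = ucb_scores.min(axis=1)   # worst-case UCB per arm
    return int(worst_case.argmax())       # most robust arm

# Example: 3 arms, 2 plausible true contexts.
scores = np.array([[0.9, 0.1],
                   [0.5, 0.4],
                   [0.6, 0.3]])
print(maxmin_ucb_select(scores))  # prints 1: best worst-case UCB (0.4)
```

A greedy rule that trusted the imperfect context alone might pick arm 0 (UCB 0.9 under one candidate), but its reward collapses to 0.1 if the other context is the true one; the max-min rule hedges against that.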
How to Cite
Yang, J., & Ren, S. (2021). Robust Bandit Learning with Imperfect Context. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 10594-10602. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17267
AAAI Technical Track on Machine Learning V