RLogist: Fast Observation Strategy on Whole-Slide Images with Deep Reinforcement Learning
DOI: https://doi.org/10.1609/aaai.v37i3.25467
Keywords: CV: Medical and Biological Imaging, APP: Healthcare, Medicine & Wellness, ML: Reinforcement Learning Algorithms, PRS: Routing
Abstract
Whole-slide images (WSI) in computational pathology have gigapixel resolution but generally contain only sparse regions of interest, which leads to weak diagnostic relevance and data inefficiency for each area in the slide. Most existing methods rely on a multiple instance learning framework that requires densely sampling local patches at high magnification. This limitation becomes evident at the application stage, where the heavy computation for extracting patch-level features is unavoidable. In this paper, we develop RLogist, a benchmarking deep reinforcement learning (DRL) method for fast observation strategy on WSIs. Imitating the diagnostic logic of human pathologists, our RL agent learns to find regions of observation value and to obtain representative features across multiple resolution levels, without having to analyze every part of the WSI at high magnification. We benchmark our method on two whole-slide-level classification tasks: detection of metastases in WSIs of lymph node sections, and subtyping of lung cancer. Experimental results demonstrate that RLogist achieves classification performance competitive with typical multiple instance learning algorithms while following a significantly shorter observation path. In addition, the observation path given by RLogist provides good decision-making interpretability, and its reading-path navigation could potentially be used by pathologists for educational or assistive purposes. Our code is available at: https://github.com/tencent-ailab/RLogist
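The core idea in the abstract — scoring low-magnification regions and zooming into only a few of them instead of densely sampling every high-magnification patch — can be sketched as a simple budgeted selection loop. This is a minimal illustration, not the authors' implementation: the region identifiers, the scoring callable (standing in for the learned DRL policy), and the fixed observation budget are all hypothetical.

```python
# Illustrative sketch (not RLogist itself): a policy scores regions seen at
# low magnification, and only the top-scoring ones are "zoomed into" at high
# magnification, yielding a short observation path instead of dense sampling.
import random


def observe_slide(low_mag_regions, policy_score, budget):
    """Return the subset of regions selected for high-magnification review.

    low_mag_regions: iterable of region identifiers seen at low magnification.
    policy_score:    callable mapping a region to an observation-value score
                     (a stand-in for the learned DRL policy).
    budget:          maximum number of high-magnification observations.
    """
    ranked = sorted(low_mag_regions, key=policy_score, reverse=True)
    return ranked[:budget]  # the short observation path


# Toy usage: 100 candidate regions, pseudo-random scores, budget of 5.
rng = random.Random(0)
regions = list(range(100))
scores = {r: rng.random() for r in regions}
path = observe_slide(regions, scores.get, budget=5)
print(len(path))  # 5 regions observed instead of all 100
```

The appeal of this pattern is that high-magnification feature extraction, the expensive step, runs only on the selected path; in the actual method the scores come from a learned policy rather than a fixed ranking.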
Published
2023-06-26
How to Cite
Zhao, B., Zhang, J., Ye, D., Cao, J., Han, X., Fu, Q., & Yang, W. (2023). RLogist: Fast Observation Strategy on Whole-Slide Images with Deep Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(3), 3570-3578. https://doi.org/10.1609/aaai.v37i3.25467
Section
AAAI Technical Track on Computer Vision III