Learning Mixture of Domain-Specific Experts via Disentangled Factors for Autonomous Driving
DOI: https://doi.org/10.1609/aaai.v36i1.20000
Keywords: Computer Vision (CV)
Abstract
Because human drivers consider only the driving-related factors that affect vehicle control in a given situation, they can drive safely across diverse driving environments. To mimic this behavior, we propose an autonomous driving framework based on two-stage representation learning that first splits the latent features into domain-specific and domain-general features. Subsequently, the dynamic-object features, which contain information about dynamic objects, are disentangled from the latent features using a mutual information estimator. In this study, the behavior cloning problem is divided into several domain-specific subspaces, with each expert becoming specialized in its domain-specific policy. The proposed mixture of domain-specific experts (MoDE) model predicts the final control values through the cooperation of the experts via a gating function. The domain-specific features are used to compute the importance weights of the domain-specific experts, while the disentangled domain-general and dynamic-object features are used to estimate the control values. To validate the proposed MoDE model, we conducted several experiments and achieved a higher success rate than state-of-the-art approaches on the CARLA benchmarks under several conditions and tasks.
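The expert-gating scheme described in the abstract can be illustrated with a minimal PyTorch sketch. The module names, feature dimensions, and softmax gating below are assumptions for illustration only, not the authors' implementation: each expert maps the concatenated domain-general and dynamic-object features to control values, and a gating network computes importance weights over the experts from the domain-specific features.

```python
# Minimal sketch of a mixture of domain-specific experts (MoDE)-style head.
# All dimensions, module names, and the softmax gating are illustrative
# assumptions; this is not the authors' implementation.
import torch
import torch.nn as nn


class MoDEHead(nn.Module):
    def __init__(self, spec_dim=128, gen_dim=128, obj_dim=64,
                 n_experts=4, n_controls=3):
        super().__init__()
        # One expert per domain-specific subspace; each predicts control values
        # (e.g., steering, throttle, brake) from domain-general + dynamic-object features.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(gen_dim + obj_dim, 256),
                nn.ReLU(),
                nn.Linear(256, n_controls),
            )
            for _ in range(n_experts)
        )
        # Gating function: importance weights over experts from domain-specific features.
        self.gate = nn.Sequential(nn.Linear(spec_dim, n_experts), nn.Softmax(dim=-1))

    def forward(self, f_spec, f_gen, f_obj):
        x = torch.cat([f_gen, f_obj], dim=-1)
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)  # (B, E, C)
        weights = self.gate(f_spec).unsqueeze(-1)                      # (B, E, 1)
        return (weights * expert_out).sum(dim=1)                       # (B, C)


if __name__ == "__main__":
    head = MoDEHead()
    controls = head(torch.randn(2, 128), torch.randn(2, 128), torch.randn(2, 64))
    print(controls.shape)  # torch.Size([2, 3])
```

In this sketch the final control values are a weighted sum of the experts' outputs, with the weights supplied by the gating network; harder or softer expert selection would be a design choice not specified by the abstract.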
Published: 2022-06-28
How to Cite
Kim, I., Lee, J., & Kim, D. (2022). Learning Mixture of Domain-Specific Experts via Disentangled Factors for Autonomous Driving. Proceedings of the AAAI Conference on Artificial Intelligence, 36(1), 1148-1156. https://doi.org/10.1609/aaai.v36i1.20000
Issue: Vol. 36 No. 1 (2022)
Section: AAAI Technical Track on Computer Vision I