Learning Content-Enhanced Mask Transformer for Domain Generalized Urban-Scene Segmentation

Authors

  • Qi Bi, University of Amsterdam
  • Shaodi You, University of Amsterdam
  • Theo Gevers, University of Amsterdam

DOI:

https://doi.org/10.1609/aaai.v38i2.27840

Keywords:

CV: Vision for Robotics & Autonomous Driving, CV: Segmentation

Abstract

Domain-generalized urban-scene semantic segmentation (USSS) aims to learn generalized semantic predictions across diverse urban-scene styles. Unlike generic domain-gap challenges, USSS is unique in that the semantic categories are often similar across different urban scenes, while the styles can vary significantly due to changes in urban landscape, weather, lighting, and other factors. Existing approaches typically rely on convolutional neural networks (CNNs) to learn the content of urban scenes. In this paper, we propose a Content-enhanced Mask TransFormer (CMFormer) for domain-generalized USSS. The main idea is to strengthen the focus on content information in the mask attention mechanism, the fundamental component of Transformer-based segmentation models. Through empirical analysis, we observe that a mask representation effectively captures pixel segments, albeit with reduced robustness to style variations. Conversely, its lower-resolution counterpart is more robust to style variations, while being less proficient at representing pixel segments. To harness the complementary strengths of these two representations, we introduce a novel content-enhanced mask attention mechanism. It learns mask queries from both the image feature and its down-sampled counterpart, aiming to simultaneously capture content and accommodate style variations. These features are fused within a Transformer decoder and integrated into a multi-resolution content-enhanced mask attention learning scheme. Extensive experiments on various domain-generalized urban-scene segmentation datasets demonstrate that the proposed CMFormer significantly outperforms existing CNN-based methods by up to 14.0% mIoU and the contemporary HGFormer by up to 1.7% mIoU. The source code is publicly available at https://github.com/BiQiWHU/CMFormer.
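To make the mechanism concrete, below is a minimal PyTorch sketch of content-enhanced mask attention as the abstract describes it: mask queries attend to both the image feature and its down-sampled counterpart, and the two query streams are fused. This is an illustrative reading, not the authors' implementation; the class name, the average-pooling down-sampling, and the linear fusion layer are assumptions, and the paper's exact design may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentEnhancedMaskAttention(nn.Module):
    """Sketch: mask queries attend to a high-resolution image feature
    (content-focused stream) and to its down-sampled counterpart
    (style-robust stream); the two streams are then fused."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn_hi = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_lo = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Simple concat-and-project fusion; the paper's fusion may differ.
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, queries: torch.Tensor, feat: torch.Tensor) -> torch.Tensor:
        # queries: (B, N, C) mask queries; feat: (B, C, H, W) image feature
        feat_lo = F.avg_pool2d(feat, kernel_size=2)      # down-sampled counterpart
        kv_hi = feat.flatten(2).transpose(1, 2)          # (B, H*W, C)
        kv_lo = feat_lo.flatten(2).transpose(1, 2)       # (B, H/2 * W/2, C)
        q_hi, _ = self.attn_hi(queries, kv_hi, kv_hi)    # content-focused stream
        q_lo, _ = self.attn_lo(queries, kv_lo, kv_lo)    # style-robust stream
        return self.fuse(torch.cat([q_hi, q_lo], dim=-1))

# Example: 100 mask queries over a 32x32 feature map with 256 channels
attn = ContentEnhancedMaskAttention(dim=256)
q = torch.randn(2, 100, 256)
f = torch.randn(2, 256, 32, 32)
out = attn(q, f)  # (2, 100, 256) fused mask queries
```

In the full model, this would be applied at multiple feature resolutions (the multi-resolution learning scheme), with the fused queries fed to the Transformer decoder.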

Published

2024-03-24

How to Cite

Bi, Q., You, S., & Gevers, T. (2024). Learning Content-Enhanced Mask Transformer for Domain Generalized Urban-Scene Segmentation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(2), 819-827. https://doi.org/10.1609/aaai.v38i2.27840

Issue

Vol. 38 No. 2 (2024)

Section

AAAI Technical Track on Computer Vision I