Domain-General Crowd Counting in Unseen Scenarios

Authors

  • Zhipeng Du, King's College London
  • Jiankang Deng, Huawei London Research Center
  • Miaojing Shi, Tongji University; King's College London

DOI:

https://doi.org/10.1609/aaai.v37i1.25131

Keywords:

CV: Applications

Abstract

Domain shift across crowd data severely hinders crowd counting models from generalizing to unseen scenarios. Although domain-adaptive crowd counting approaches close this gap to a certain extent, they still depend on target domain data to adapt (e.g., fine-tune) their models to the specific domain. In this paper, we instead aim to train a model on a single source domain that generalizes well to any unseen domain. This falls into the realm of domain generalization, which remains unexplored in crowd counting. We first introduce a dynamic sub-domain division scheme that divides the source domain into multiple sub-domains so that we can initiate a meta-learning framework for domain generalization; the sub-domain division is dynamically refined during meta-learning. Next, in order to disentangle domain-invariant information from domain-specific information in image features, we design domain-invariant and domain-specific crowd memory modules to re-encode image features. Two types of losses, i.e., feature reconstruction and orthogonal losses, are devised to enable this disentanglement. Extensive experiments on several standard crowd counting benchmarks, i.e., SHA, SHB, QNRF, and NWPU, show the strong generalizability of our method. Our code is available at: https://github.com/ZPDu/Domain-general-Crowd-Counting-in-Unseen-Scenarios
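
To make the disentanglement idea in the abstract concrete, here is a minimal PyTorch-style sketch of how memory-based re-encoding with a feature reconstruction loss and an orthogonality loss could look. It assumes attention over learned memory slots and equal loss treatment; all module and function names (CrowdMemory, disentanglement_losses) are illustrative, not the authors' released implementation (see the linked repository for that).

```python
# Illustrative sketch: image features are re-encoded by a domain-invariant and a
# domain-specific crowd memory; a reconstruction loss asks the two re-encodings
# to jointly recover the input features, while an orthogonality loss keeps them
# decorrelated. Names and weightings are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrowdMemory(nn.Module):
    """Re-encodes features as attention-weighted sums of learned memory slots."""

    def __init__(self, num_slots: int = 64, dim: int = 256):
        super().__init__()
        self.slots = nn.Parameter(torch.randn(num_slots, dim))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, C) flattened spatial features
        attn = torch.softmax(feats @ self.slots.t() / feats.size(-1) ** 0.5, dim=-1)
        return attn @ self.slots  # (B, N, C) re-encoded features


def disentanglement_losses(feats, inv_feats, spec_feats):
    """Feature reconstruction + orthogonality losses (hypothetical formulation)."""
    # Reconstruction: invariant + specific re-encodings should recover the original features.
    recon = F.mse_loss(inv_feats + spec_feats, feats)
    # Orthogonality: penalize overlap between the two re-encodings.
    inv_n = F.normalize(inv_feats, dim=-1)
    spec_n = F.normalize(spec_feats, dim=-1)
    ortho = (inv_n * spec_n).sum(-1).pow(2).mean()
    return recon, ortho


if __name__ == "__main__":
    feats = torch.randn(2, 1024, 256)  # e.g. flattened backbone features
    inv_mem, spec_mem = CrowdMemory(), CrowdMemory()
    inv_feats, spec_feats = inv_mem(feats), spec_mem(feats)
    recon, ortho = disentanglement_losses(feats, inv_feats, spec_feats)
    print(recon.item(), ortho.item())
```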

Published

2023-06-26

How to Cite

Du, Z., Deng, J., & Shi, M. (2023). Domain-General Crowd Counting in Unseen Scenarios. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 561-570. https://doi.org/10.1609/aaai.v37i1.25131

Section

AAAI Technical Track on Computer Vision I