OmniCount: Multi-label Object Counting with Semantic-Geometric Priors

Authors

  • Anindya Mondal, University of Surrey
  • Sauradip Nag, Simon Fraser University
  • Xiatian Zhu, University of Surrey
  • Anjan Dutta, University of Surrey

DOI:

https://doi.org/10.1609/aaai.v39i18.34151

Abstract

Object counting is pivotal for understanding the composition of scenes. Previously, this task was dominated by class-specific methods, which have gradually evolved into more adaptable class-agnostic strategies. However, these strategies come with their own limitations, such as the need for manual exemplar input and multiple passes for multiple categories, resulting in significant inefficiencies. This paper introduces a more practical approach that enables simultaneous counting of multiple object categories using an open-vocabulary framework. Our solution, OmniCount, stands out by using semantic and geometric insights (priors) from pre-trained models to count multiple categories of objects as specified by users, all without additional training. OmniCount distinguishes itself by generating precise object masks and leveraging varied interactive prompts via the Segment Anything Model for efficient counting. To evaluate OmniCount, we created the OmniCount-191 benchmark, a first-of-its-kind dataset with multi-label object counts, including points, bounding boxes, and VQA annotations. Our comprehensive evaluation on OmniCount-191, alongside other leading benchmarks, demonstrates OmniCount's exceptional performance, significantly outpacing existing solutions.

Published

2025-04-11

How to Cite

Mondal, A., Nag, S., Zhu, X., & Dutta, A. (2025). OmniCount: Multi-label Object Counting with Semantic-Geometric Priors. Proceedings of the AAAI Conference on Artificial Intelligence, 39(18), 19537-19545. https://doi.org/10.1609/aaai.v39i18.34151

Section

AAAI Technical Track on Machine Learning IV