EventPillars: Pillar-based Efficient Representations for Event Data

Authors

  • Rui Fan Key Laboratory of Analog Integrated Circuits and Systems (Ministry of Education) School of Integrated Circuits, Xidian University, Xi’an 710071, China
  • Weidong Hao Key Laboratory of Analog Integrated Circuits and Systems (Ministry of Education) School of Integrated Circuits, Xidian University, Xi’an 710071, China
  • Juntao Guan Key Laboratory of Analog Integrated Circuits and Systems (Ministry of Education) School of Integrated Circuits, Xidian University, Xi’an 710071, China Hangzhou Institute of Technology, Xidian University, Hangzhou, China
  • Lai Rui Key Laboratory of Analog Integrated Circuits and Systems (Ministry of Education) School of Integrated Circuits, Xidian University, Xi’an 710071, China
  • Lin Gu RIKEN AIP, Tokyo 103-0027, Japan; The University of Tokyo, Japan
  • Tong Wu Key Laboratory of Analog Integrated Circuits and Systems (Ministry of Education) School of Integrated Circuits, Xidian University, Xi’an 710071, China
  • Fanhong Zeng Key Laboratory of Analog Integrated Circuits and Systems (Ministry of Education) School of Integrated Circuits, Xidian University, Xi’an 710071, China
  • Zhangming Zhu Key Laboratory of Analog Integrated Circuits and Systems (Ministry of Education) School of Integrated Circuits, Xidian University, Xi’an 710071, China

DOI:

https://doi.org/10.1609/aaai.v39i3.32292

Abstract

Event cameras offer appealing advantages, including power efficiency and ultra-low latency, driving advances in edge applications. To leverage mature frame-based algorithms, most approaches compute dense, image-like representations from sparse, asynchronous events. However, these representations often fail to capture comprehensive information or are computationally intensive, which hinders the edge deployment of event-based vision. Meanwhile, pillar-based paradigms have proven efficient and well established for dense representations of sparse data. Hence, from a novel pillar-based perspective, we present EventPillars, an efficient and comprehensive framework for dense event representations. In summary, it (i) incorporates the Temporal Event Range to describe an intact temporal distribution, (ii) activates the Event Polarities to explicitly record scene dynamics, (iii) enhances target awareness with a spatial attention prior derived from Normalized Event Density, and (iv) can be plugged into different downstream tasks. Extensive experiments show that EventPillars sets a new state-of-the-art accuracy on object recognition and detection datasets with 9.2× and 4.5× lower computation and storage consumption, respectively. This offers new insight into dense event representations and promises to boost the edge deployment of event-based vision.
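To make the pillar-based idea concrete, below is a minimal, generic sketch of turning a sparse event stream of tuples (x, y, t, p) into a dense, image-like grid. The channel choices here (normalized event density, per-pixel temporal range, signed polarity sum) are illustrative assumptions loosely inspired by the abstract's components, not the paper's exact EventPillars formulation.

```python
# Hypothetical sketch: dense pillar-style representation of sparse events.
# Channels are assumptions for illustration, not the authors' exact design.
import numpy as np

def pillarize(events, height, width):
    """events: (N, 4) array of [x, y, t, p] with polarity p in {-1, +1}.
    Returns a (3, H, W) dense tensor: [density, temporal range, polarity]."""
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t = events[:, 2]
    p = events[:, 3]

    # Normalize timestamps to [0, 1] so the temporal channel is scale-free.
    t = (t - t.min()) / max(t.max() - t.min(), 1e-9)

    count = np.zeros((height, width))      # event count per pillar
    t_min = np.full((height, width), 1.0)  # earliest event time per pillar
    t_max = np.zeros((height, width))      # latest event time per pillar
    pol = np.zeros((height, width))        # signed polarity accumulation

    # Scatter events into their (y, x) pillars without explicit loops.
    np.add.at(count, (y, x), 1.0)
    np.minimum.at(t_min, (y, x), t)
    np.maximum.at(t_max, (y, x), t)
    np.add.at(pol, (y, x), p)

    # Temporal range per pillar; empty pillars get 0.
    t_range = np.where(count > 0, t_max - t_min, 0.0)
    # Normalized density can serve as a crude spatial attention prior.
    density = count / max(count.max(), 1.0)
    return np.stack([density, t_range, pol], axis=0)
```

Because every channel is computed with vectorized scatter operations over a fixed grid, the resulting tensor can be fed directly to standard frame-based backbones, which is the efficiency argument pillar-style representations rely on.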

Published

2025-04-11

How to Cite

Fan, R., Hao, W., Guan, J., Rui, L., Gu, L., Wu, T., … Zhu, Z. (2025). EventPillars: Pillar-based Efficient Representations for Event Data. Proceedings of the AAAI Conference on Artificial Intelligence, 39(3), 2861–2869. https://doi.org/10.1609/aaai.v39i3.32292

Section

AAAI Technical Track on Computer Vision II