Transmission-Guided Bayesian Generative Model for Smoke Segmentation


  • Siyuan Yan Australian National University
  • Jing Zhang Australian National University
  • Nick Barnes Australian National University



Keywords: Computer Vision (CV), Machine Learning (ML)


Smoke segmentation is essential for precisely localizing wildfires so that they can be extinguished at an early stage. Although deep neural networks have achieved promising results on image segmentation tasks, they are prone to overconfidence in smoke segmentation due to smoke's non-rigid shape and transparent appearance. This overconfidence stems from two sources: knowledge-level uncertainty, arising from the limited training data available for accurate smoke segmentation, and labeling-level uncertainty, reflecting the difficulty of labeling ground truth. To effectively model both types of uncertainty, we introduce a Bayesian generative model that simultaneously estimates the posterior distribution of the model parameters and its predictions. Further, since smoke images suffer from low contrast and ambiguity, we take inspiration from physics-based image dehazing methods and design a transmission-guided local coherence loss that guides the network to learn pairwise relationships based on pixel distance and the transmission feature. To promote the development of this field, we also contribute a high-quality smoke segmentation dataset, SMOKE5K, consisting of 1,400 real and 4,000 synthetic images with pixel-wise annotations. Experimental results on benchmark testing datasets show that our model achieves both accurate predictions and reliable uncertainty maps representing the model's ignorance about its predictions. Our code and dataset are publicly available at:
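To illustrate the idea behind a transmission-guided local coherence loss, the sketch below encourages nearby pixels with similar transmission values to receive similar predictions. This is a minimal illustrative assumption of how such a loss could be formed, not the paper's exact formulation; the function name, the Gaussian affinity kernel, and the bandwidth parameters `sigma_d` and `sigma_t` are all hypothetical.

```python
import numpy as np

def local_coherence_loss(pred, transmission, coords, sigma_d=1.0, sigma_t=0.1):
    """Illustrative sketch (not the paper's exact loss): penalize prediction
    disagreement between pixel pairs that are spatially close and have
    similar transmission values."""
    # Pairwise spatial distances between pixel coordinates, shape (N, N)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # Pairwise absolute transmission differences, shape (N, N)
    t = np.abs(transmission[:, None] - transmission[None, :])
    # Affinity kernel: large when pixels are close AND transmission agrees
    w = np.exp(-(d ** 2) / (2 * sigma_d ** 2) - (t ** 2) / (2 * sigma_t ** 2))
    # Prediction disagreement, weighted by the affinity and normalized
    p = np.abs(pred[:, None] - pred[None, :])
    return float((w * p).sum() / w.sum())
```

Under this assumed formulation, a spatially uniform prediction incurs zero loss, while disagreement between close, transmission-similar pixels is penalized most heavily.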




How to Cite

Yan, S., Zhang, J., & Barnes, N. (2022). Transmission-Guided Bayesian Generative Model for Smoke Segmentation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(3), 3009-3017.



AAAI Technical Track on Computer Vision III