Structured Output Prediction for Semantic Perception in Autonomous Vehicles

Authors

  • Rein Houthooft, Ghent University and iMinds
  • Cedric De Boom, Ghent University and iMinds
  • Stijn Verstichel, Ghent University and iMinds
  • Femke Ongenae, Ghent University and iMinds
  • Filip De Turck, Ghent University and iMinds

DOI:

https://doi.org/10.1609/aaai.v30i1.10447

Keywords:

structured prediction, autonomous vehicles, segmentation

Abstract

A key challenge in the realization of autonomous vehicles is the machine's ability to perceive its surrounding environment. This task is tackled through a model that partitions vehicle camera input into distinct semantic classes by taking into account visual contextual cues. The use of structured machine learning models is investigated, which allow not only for complex input but also for arbitrarily structured output. Towards this goal, an outdoor road scene dataset is constructed with accompanying fine-grained image labelings. For coherent segmentation, a structured predictor is modeled to encode label distributions conditioned on the input images. After optimizing this model through max-margin learning, based on an ontological loss function, efficient classification is realized via graph cuts inference using alpha-expansion. Both quantitative and qualitative analyses demonstrate that by taking into account contextual relations between pixel segmentation regions within a second-degree neighborhood, spurious label assignments are filtered out, leading to highly accurate semantic segmentations for outdoor scenes.

Published

2016-03-05

How to Cite

Houthooft, R., De Boom, C., Verstichel, S., Ongenae, F., & De Turck, F. (2016). Structured Output Prediction for Semantic Perception in Autonomous Vehicles. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10447