Visual Place Recognition via Robust ℓ2-Norm Distance Based Holism and Landmark Integration

Authors

  • Kai Liu, Colorado School of Mines
  • Hua Wang, Colorado School of Mines
  • Fei Han, Colorado School of Mines
  • Hao Zhang, Colorado School of Mines

DOI:

https://doi.org/10.1609/aaai.v33i01.33018034

Abstract

Visual place recognition is essential for large-scale simultaneous localization and mapping (SLAM). Long-term robot operations across different times of day, months, and seasons introduce new challenges arising from significant variations in environment appearance. In this paper, we propose a novel method to learn a location representation that integrates the semantic landmarks of a place with its holistic representation. To promote the robustness of our new model against the drastic appearance variations caused by long-term visual changes, we formulate our objective using non-squared ℓ2-norm distances, which leads to a difficult optimization problem that minimizes the ratio of the ℓ2,1-norms of matrices. To solve our objective, we derive a new efficient iterative algorithm whose convergence is rigorously guaranteed by theory. In addition, because our solution is strictly orthogonal, the learned location representations have better place recognition capabilities. We evaluate the proposed method on two large-scale benchmark data sets, the CMU-VL and Nordland data sets. Experimental results validate the effectiveness of our new method in long-term visual place recognition applications.
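The abstract's robust formulation rests on the ℓ2,1-norm of a matrix, which sums the (non-squared) ℓ2 norms of the rows so that outlier rows contribute linearly rather than quadratically. The sketch below, a minimal illustration rather than the paper's actual algorithm, evaluates that norm and the kind of ℓ2,1-norm ratio objective the abstract describes; the matrices `A` and `B` are hypothetical stand-ins for the problem-specific matrices defined in the paper.

```python
import numpy as np

def l21_norm(M):
    """Sum of the l2 norms of the rows of M (non-squared),
    so each row's residual contributes linearly -> robustness to outliers."""
    return float(np.sum(np.linalg.norm(M, axis=1)))

def ratio_objective(A, B, W):
    """Hypothetical objective of the form ||A W||_{2,1} / ||B W||_{2,1},
    illustrating the ratio-of-l21-norms structure mentioned in the abstract."""
    return l21_norm(A @ W) / l21_norm(B @ W)

# A row with large entries adds only its l2 length, not its square:
M = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [1.0, 0.0]])
print(l21_norm(M))  # 5.0 + 0.0 + 1.0 = 6.0
```

Note the contrast with the squared Frobenius norm, where the `[3, 4]` row would contribute 25 instead of 5 and dominate the objective under appearance outliers.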

Published

2019-07-17

How to Cite

Liu, K., Wang, H., Han, F., & Zhang, H. (2019). Visual Place Recognition via Robust ℓ2-Norm Distance Based Holism and Landmark Integration. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 8034-8041. https://doi.org/10.1609/aaai.v33i01.33018034

Section

AAAI Technical Track: Robotics