CTIN: Robust Contextual Transformer Network for Inertial Navigation

Authors

  • Bingbing Rao University of Central Florida, Orlando, FL
  • Ehsan Kazemi University of Central Florida, Orlando, FL; Unknot.id Inc., Orlando, FL
  • Yifan Ding University of Central Florida, Orlando, FL
  • Devu M Shila Unknot.id Inc., Orlando, FL
  • Frank M Tucker U.S. Army CCDC SC, Orlando, FL
  • Liqiang Wang University of Central Florida, Orlando, FL

DOI:

https://doi.org/10.1609/aaai.v36i5.20479

Keywords:

Intelligent Robotics (ROB)

Abstract

Recently, data-driven inertial navigation approaches have demonstrated their capability of using well-trained neural networks to obtain accurate position estimates from inertial measurement unit (IMU) measurements. In this paper, we propose a novel robust Contextual Transformer-based network for Inertial Navigation (CTIN) to accurately predict velocity and trajectory. To this end, we first design a ResNet-based encoder enhanced by local and global multi-head self-attention to capture spatial contextual information from IMU measurements. We then fuse these spatial representations with temporal knowledge by leveraging multi-head attention in the Transformer decoder. Finally, multi-task learning with uncertainty reduction is leveraged to improve learning efficiency and the prediction accuracy of velocity and trajectory. Extensive experiments over a wide range of inertial datasets (e.g., RIDI, OxIOD, RoNIN, IDOL, and our own) show that CTIN is robust and outperforms state-of-the-art models.
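The pipeline the abstract describes (self-attention over IMU features, cross-attention fusing temporal queries with spatial representations, then velocity regression integrated into a trajectory) can be sketched as follows. This is a minimal illustrative NumPy sketch, not the authors' implementation: the single-head attention, layer sizes, sampling rate, and all variable names are assumptions standing in for CTIN's ResNet encoder and Transformer decoder.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(0)
T, d = 200, 32                          # window of 200 IMU samples, 32-dim features (sizes assumed)
imu = rng.standard_normal((T, 6))       # 3-axis accelerometer + 3-axis gyroscope per sample
W_in = rng.standard_normal((6, d)) * 0.1

# "Encoder": project raw IMU readings, then self-attend across the window
# to capture spatial context (stand-in for the ResNet + attention encoder).
feats = imu @ W_in
spatial = attention(feats, feats, feats)

# "Decoder": temporal queries cross-attend to the spatial representations,
# mimicking the fusion of temporal knowledge via multi-head attention.
queries = rng.standard_normal((T, d))
fused = attention(queries, spatial, spatial)

# Regress a 2-D velocity per sample and integrate it into a trajectory.
W_vel = rng.standard_normal((d, 2)) * 0.1
vel = fused @ W_vel                     # per-sample planar velocity estimate
dt = 0.01                               # 100 Hz IMU rate, assumed
traj = np.cumsum(vel * dt, axis=0)      # integrate velocity into position
print(traj.shape)                       # one 2-D position per sample
```

A trained model would learn `W_in` and `W_vel` (and the attention projections) jointly under the paper's multi-task loss over velocity and trajectory; here they are random placeholders so the data flow alone is visible.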

Published

2022-06-28

How to Cite

Rao, B., Kazemi, E., Ding, Y., Shila, D. M., Tucker, F. M., & Wang, L. (2022). CTIN: Robust Contextual Transformer Network for Inertial Navigation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(5), 5413-5421. https://doi.org/10.1609/aaai.v36i5.20479

Section

AAAI Technical Track on Intelligent Robotics