Edge LLMs for Real-Time Contextual Understanding with Ground Robots

Authors

  • Tamil Selvan Gurunathan, UMBC
  • Muhammad Shehrose Raza, UMBC
  • Aswin Kumar Janakiraman, UMBC
  • Md Azim Khan, UMBC
  • Biplab Pal, UMBC
  • Aryya Gangopadhyay, UMBC

DOI:

https://doi.org/10.1609/aaaiss.v5i1.35583

Abstract

We propose a novel framework that leverages Edge Large Language Models (LLMs) for real-time decision-making and contextual understanding on robotic platforms. By embedding LLMs directly on edge devices, the system enables autonomous operation in zero-visibility environments such as tunnels, adverse weather, or tactical obstructions. The framework integrates multi-modal sensor inputs, including mmWave radar and thermal cameras, and employs pretrained LLMs fine-tuned for low-latency inference under strict computational constraints. Experiments demonstrate the framework’s ability to navigate, detect threats, and prioritize tasks such as medical assistance, achieving high semantic accuracy and significantly outperforming baseline methods such as few-shot learning and prompt engineering. Furthermore, the system scales to diverse applications, including search and rescue, tactical operations, and multi-robot coordination. This work highlights the transformative potential of Edge LLMs in enabling intelligent, reliable, and autonomous robotic systems for dynamic, resource-constrained environments.
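The abstract gives no implementation details, so the sketch below is purely illustrative: it shows one plausible shape of the pipeline described (serializing mmWave radar and thermal readings into a text prompt for an on-device model). Every name here (SensorFrame, frame_to_prompt, EdgeLLM) is a hypothetical stand-in, not the authors' code, and the "model" is a trivial rule so the example runs end to end.

```python
# Illustrative sketch only; the paper's implementation is not reproduced here.
# All sensor readers and the EdgeLLM wrapper are hypothetical stand-ins.

from dataclasses import dataclass

@dataclass
class SensorFrame:
    radar_ranges_m: list[float]  # mmWave radar: coarse range profile (meters)
    thermal_hotspots: int        # thermal camera: count of human-temperature blobs

def frame_to_prompt(frame: SensorFrame) -> str:
    """Serialize multi-modal readings into a compact text prompt,
    since an edge LLM consumes tokens rather than raw sensor tensors."""
    nearest = min(frame.radar_ranges_m) if frame.radar_ranges_m else float("inf")
    return (
        f"Nearest obstacle: {nearest:.1f} m. "
        f"Thermal signatures detected: {frame.thermal_hotspots}. "
        "Choose one action: NAVIGATE, STOP, or ASSIST. Explain briefly."
    )

class EdgeLLM:
    """Placeholder for a quantized on-device model (e.g., served via llama.cpp);
    here it applies a trivial rule so the sketch is runnable."""
    def generate(self, prompt: str) -> str:
        if "Thermal signatures detected: 0" not in prompt:
            return "ASSIST: thermal signature suggests a person who may need aid."
        return "NAVIGATE: path appears clear."

if __name__ == "__main__":
    frame = SensorFrame(radar_ranges_m=[4.2, 6.8, 3.1], thermal_hotspots=1)
    print(EdgeLLM().generate(frame_to_prompt(frame)))
```

Text serialization of sensor summaries is one common way to couple non-visual sensing to an LLM under tight compute budgets; whether the authors use this or a learned multi-modal encoder is not stated in the abstract.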

Published

2025-05-28

How to Cite

Gurunathan, T. S., Raza, M. S., Janakiraman, A. K., Khan, M. A., Pal, B., & Gangopadhyay, A. (2025). Edge LLMs for Real-Time Contextual Understanding with Ground Robots. Proceedings of the AAAI Symposium Series, 5(1), 159–166. https://doi.org/10.1609/aaaiss.v5i1.35583

Section

GenAI@Edge: Empowering Generative AI at the Edge