Forgetful Large Language Models: Lessons Learned from Using LLMs in Robot Programming

Authors

  • Juo-Tung Chen, Johns Hopkins University
  • Chien-Ming Huang, Johns Hopkins University

DOI:

https://doi.org/10.1609/aaaiss.v2i1.27721

Keywords:

Large Language Model, Robot Programming, Code Generation, Prompt Engineering

Abstract

Large language models (LLMs) offer new ways of empowering people to program robot applications, namely code generation via prompting. However, the code generated by LLMs is susceptible to errors. This work reports a preliminary exploration that empirically characterizes common errors produced by LLMs in robot programming. We categorize these errors into two phases: interpretation and execution. In this work, we focus on errors in execution and observe that they are caused by LLMs being “forgetful” of key information provided in user prompts. Based on this observation, we propose prompt engineering tactics designed to reduce errors in execution. We then demonstrate the effectiveness of these tactics with three language models: ChatGPT, Bard, and LLaMA-2. Finally, we discuss lessons learned from using LLMs in robot programming and call for the benchmarking of LLM-powered end-user development of robot applications.
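
The abstract does not spell out the proposed tactics, but one plausible way to counter the "forgetful" behavior it describes is to restate the key information from the user's original prompt in every follow-up code-generation request, rather than assuming the model retains it across turns. The Python sketch below is illustrative only: KEY_CONSTRAINTS, build_prompt, and query_llm are assumed names, and query_llm is a hypothetical placeholder for whichever chat-based model (e.g., ChatGPT, Bard, or LLaMA-2) is used.

    # Illustrative sketch of one prompt engineering tactic: restate key
    # task information in every request so the model cannot "forget" it.
    # All names here are assumptions, not the paper's actual implementation.

    KEY_CONSTRAINTS = [
        "The robot workspace is limited to x in [0.2, 0.6] m and y in [-0.3, 0.3] m.",
        "The gripper can hold at most one object at a time.",
        "Move to the home pose before and after each pick-and-place.",
    ]

    def build_prompt(user_request: str) -> str:
        """Prepend the key constraints to every request, including follow-ups."""
        constraints = "\n".join(f"- {c}" for c in KEY_CONSTRAINTS)
        return (
            "You are generating Python code for a robot arm.\n"
            "Key constraints (the generated code must respect all of them):\n"
            f"{constraints}\n\n"
            f"Task: {user_request}\n"
        )

    def query_llm(prompt: str) -> str:
        """Hypothetical placeholder for a call to an LLM code-generation API."""
        raise NotImplementedError("Connect this to the LLM provider of choice.")

    if __name__ == "__main__":
        prompt = build_prompt("Pick up the red block and place it in the blue bin.")
        print(prompt)  # Inspect the reinforced prompt before sending it to the model.

The design choice here is simply that constraints are injected programmatically on every turn, so a multi-turn programming session does not depend on the model's memory of earlier messages.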

Published

2024-01-22

Issue

Vol. 2 No. 1
Section

Unifying Representations for Robot Application Development