Concept-Guided LLM Agents for Human-AI Safety Codesign


  • Florian Geissler, Fraunhofer Institute for Cognitive Systems IKS
  • Karsten Roscher, Fraunhofer Institute for Cognitive Systems IKS
  • Mario Trapp, Fraunhofer Institute for Cognitive Systems IKS; School of Computation, Information and Technology, Technical University of Munich



Keywords: Safety, Generative AI, Large Language Model Agents, Human-AI Codesign


Generative AI is increasingly important in software engineering, including safety engineering, which ensures that software does not cause harm to people. This places high quality demands on generative AI itself: the naive use of Large Language Models (LLMs) alone will not meet them, and more advanced, sophisticated approaches are needed to effectively address the complexities and safety concerns of software systems. Ultimately, humans must understand and take responsibility for the suggestions provided by generative AI to ensure system safety. To this end, we present an efficient, hybrid strategy for leveraging LLMs in safety analysis and human-AI codesign. In particular, we develop a customized LLM agent that combines prompt engineering, heuristic reasoning, and retrieval-augmented generation to solve tasks associated with predefined safety concepts, in interaction with a system model graph. The reasoning is guided by a cascade of micro-decisions that help preserve structured information. We further propose a graph verbalization that acts as an intermediate representation of the system model to facilitate LLM-graph interactions. Selected pairs of prompts and responses relevant to safety analytics illustrate our method for the use case of a simplified automated driving system.
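To give a concrete flavor of the graph verbalization mentioned above, the following is a minimal illustrative sketch (not the authors' implementation): a directed system model graph is rendered as plain-text sentences that can be placed in an LLM prompt. All component names and attributes are hypothetical examples loosely inspired by the automated driving use case.

```python
# Hypothetical sketch of a graph verbalization: turn a system model
# graph (nodes with types, labeled directed edges) into sentences an
# LLM can reason over inside a prompt.

def verbalize_graph(nodes, edges):
    """Render a directed system-model graph as natural-language text."""
    lines = []
    for name, attrs in nodes.items():
        lines.append(f"Component '{name}' is a {attrs['type']}.")
    for src, dst, label in edges:
        lines.append(f"'{src}' sends {label} to '{dst}'.")
    return "\n".join(lines)

# Hypothetical fragment of a simplified automated driving system model.
nodes = {
    "Camera": {"type": "sensor"},
    "Perception": {"type": "software component"},
    "Planner": {"type": "software component"},
}
edges = [
    ("Camera", "Perception", "image frames"),
    ("Perception", "Planner", "object lists"),
]

print(verbalize_graph(nodes, edges))
```

Such an intermediate textual representation lets the agent query and update the system model through ordinary prompt-response exchanges, without the LLM needing a dedicated graph interface.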






Empowering Machine Learning and Large Language Models with Domain and Commonsense Knowledge