Mental Model-based Generation of Lies for Insider Threat Modeling

Authors

  • Brittany Cates Colorado State University
  • Sarath Sreedharan Colorado State University

DOI:

https://doi.org/10.1609/aaai.v40i35.40173

Abstract

It is well understood that mental modeling forms the foundation of many everyday interactions between humans, including both collaborative and deceptive interactions. One could argue that the modeling and manipulation of mental states lies at the heart of effective deception. In this paper, we examine the security problem of insider threat attacks, in which an adversary has already infiltrated an organization. The primary challenge for this attacker is to avoid suspicion until their true goal can be achieved. We show how existing model-based explanatory methods can be leveraged to generate lies that explain away potentially suspicious activities. We also propose a novel planning formulation that generates plans that appear to achieve an assigned goal while getting close enough to reach an alternative, covert goal. We evaluate our method through computational experiments and a user study.
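The core idea of the proposed planning formulation can be illustrated with a toy sketch: among plans that legitimately reach the assigned goal, prefer one whose trajectory passes as close as possible to the covert goal. The grid world, the `covert_plan` function, and the Manhattan-distance scoring below are all hypothetical illustrations, not the paper's actual formulation.

```python
from collections import deque


def manhattan(a, b):
    """Manhattan distance between two grid cells."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])


def covert_plan(start, assigned, covert, size=5):
    """Toy illustration (not the paper's method): enumerate all shortest
    paths from start to the assigned goal on a size x size grid, then pick
    the one whose states pass closest to the covert goal."""
    best_paths = []           # shortest paths that reach the assigned goal
    frontier = [[start]]      # BFS layer of partial paths
    seen_len = {start: 0}     # shortest known distance to each cell
    while frontier and not best_paths:
        next_frontier = []
        for path in frontier:
            x, y = path[-1]
            if (x, y) == assigned:
                best_paths.append(path)
                continue
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < size and 0 <= ny < size:
                    d = len(path)  # distance of the neighbor along this path
                    if seen_len.get((nx, ny), d) >= d:
                        # keep every path that ties the shortest distance
                        seen_len[(nx, ny)] = d
                        next_frontier.append(path + [(nx, ny)])
        frontier = next_frontier
    # Among equally "innocent" shortest plans, prefer the one that gets
    # nearest to the covert goal at some point along the way.
    return min(best_paths,
               key=lambda p: min(manhattan(s, covert) for s in p))
```

For example, with `start=(0, 0)`, `assigned=(2, 2)`, and `covert=(0, 2)`, every returned plan has the minimal length of five states, so it looks like an ordinary route to the assigned goal, yet the selected plan detours through the covert cell `(0, 2)` itself.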

Published

2026-03-14

How to Cite

Cates, B., & Sreedharan, S. (2026). Mental Model-based Generation of Lies for Insider Threat Modeling. Proceedings of the AAAI Conference on Artificial Intelligence, 40(35), 29332-29340. https://doi.org/10.1609/aaai.v40i35.40173

Section

AAAI Technical Track on Multiagent Systems