Reasoning about Cognitive Trust in Stochastic Multiagent Systems

Authors

  • Xiaowei Huang, University of Oxford
  • Marta Kwiatkowska, University of Oxford

DOI:

https://doi.org/10.1609/aaai.v31i1.11050

Keywords:

Probabilistic logic, temporal logic, multiagent systems, autonomous systems, cognitive modelling, trust, model checking

Abstract

We consider the setting of stochastic multiagent systems and formulate an automated verification framework for quantifying and reasoning about agents' trust. To capture human trust, we work with a cognitive notion of trust defined as a subjective evaluation that agent A makes about agent B's ability to complete a task, which in turn may lead to a decision by A to rely on B. We propose a probabilistic rational temporal logic PRTL*, which extends the logic PCTL* with reasoning about mental attitudes (beliefs, goals and intentions), and includes novel operators that can express concepts of social trust such as competence, disposition and dependence. The logic can express, for example, that "agent A will eventually trust agent B with probability at least p that B will behave in a way that ensures the successful completion of a given task". We study the complexity of the automated verification problem and, while the general problem is undecidable, we identify restrictions on the logic and the system that result in decidable, or even tractable, subproblems.

Published

2017-02-12

How to Cite

Huang, X., & Kwiatkowska, M. (2017). Reasoning about Cognitive Trust in Stochastic Multiagent Systems. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.11050

Section

AAAI Technical Track: Reasoning under Uncertainty