Learning Abduction Using Partial Observability


  • Brendan Juba, Washington University in St. Louis
  • Zongyi Li, Washington University in St. Louis
  • Evan Miller, Washington University in St. Louis
Keywords: Abductive Reasoning, Knowledge Acquisition


Abstract

Juba recently proposed a formulation of learning abductive reasoning from examples, in which both the relative plausibility of various explanations and which explanations are valid are learned directly from data. The main shortcoming of this formulation of the task is that it assumes access to full-information (i.e., fully specified) examples; relatedly, it offers no role for declarative background knowledge, since complete information renders such knowledge redundant in the abduction task. In this work, we extend the formulation to use partially specified examples, together with declarative background knowledge about the missing data. We show that it is possible to use implicitly learned rules together with the explicitly given declarative knowledge to support hypotheses in the course of abduction. We also show how to use knowledge in the form of graphical causal models to refine the proposed hypotheses. Finally, we observe that when a small explanation exists, it is possible to obtain a much-improved guarantee in the challenging exception-tolerant setting. Such small, human-understandable explanations are of particular interest for potential applications of the task.
How to Cite

Juba, B., Li, Z., & Miller, E. (2018). Learning Abduction Using Partial Observability. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11571
AAAI Technical Track: Knowledge Representation and Reasoning