An Improved Algorithm for Learning to Perform Exception-Tolerant Abduction
DOI: https://doi.org/10.1609/aaai.v31i1.10700

Abstract
Inference from an observed or hypothesized condition to a plausible cause or explanation for this condition is known as abduction. For many tasks, the acquisition of the necessary knowledge by machine learning has been widely found to be highly effective. However, the semantics of learned knowledge are weaker than the usual classical semantics, and this necessitates new formulations of many tasks. We focus on a recently introduced formulation of the abductive inference task that is thus adapted to the semantics of machine learning. A key problem is that we cannot expect our causes or explanations to be perfect; they must tolerate some error due to the world being more complicated than our formalization allows. This is a version of the qualification problem, and in machine learning, it is known as agnostic learning. In the work by Juba that introduced the task of learning to make abductive inferences, an algorithm is given for producing k-DNF explanations that tolerates such exceptions: if the best possible k-DNF explanation fails to justify the condition with probability ε, then the algorithm is guaranteed to find a k-DNF explanation that fails to justify the condition with probability at most O(n^k ε), where n is the number of propositional attributes used to describe the domain. Here, we present an improved algorithm for this task. When the best k-DNF fails with probability ε, our algorithm finds a k-DNF that fails with probability at most Õ(n^(k/2) ε) (i.e., suppressing logarithmic factors in n and 1/ε). We also examine the empirical advantage of this new algorithm over the previous algorithm in two test domains: one of explaining conditions generated by a “noisy” k-DNF rule, and another of explaining conditions that are actually generated by a linear threshold rule.
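
To make the improvement concrete, the two guarantees can be written side by side. This is only a restatement of the bounds quoted above, where ε denotes the failure probability of the best possible k-DNF explanation and n the number of propositional attributes; no further assumptions beyond the abstract are intended:

% LaTeX sketch restating the error bounds from the abstract
\[
\text{previous algorithm (Juba): } \Pr[\text{explanation fails}] \le O\!\left(n^{k}\,\varepsilon\right),
\qquad
\text{this work: } \Pr[\text{explanation fails}] \le \tilde{O}\!\left(n^{k/2}\,\varepsilon\right).
\]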