Improving Causal Inference by Increasing Model Expressiveness

Authors

  • David D. Jensen, University of Massachusetts Amherst

DOI

https://doi.org/10.1609/aaai.v35i17.17767

Keywords

Machine Learning, Causal Inference, Knowledge Representation

Abstract

The ability to learn and reason with causal knowledge is a key aspect of intelligent behavior. In contrast to mere statistical association, knowledge of causation enables reasoning about the effects of actions. Causal reasoning is vital for autonomous agents and for a range of applications in science, medicine, business, and government. However, current methods for causal inference are hobbled because they use relatively inexpressive models. Surprisingly, current causal models eschew nearly every major representational innovation common in a range of other fields both inside and outside of computer science, including representation of objects, relationships, time, space, and hierarchy. Even more surprisingly, a range of recent research provides strong evidence that more expressive representations make possible causal inferences that are otherwise impossible and remove key biases that would otherwise afflict more naive inferences. New research on causal inference should target increases in expressiveness to improve accuracy and effectiveness.
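The abstract's central contrast between statistical association and causal knowledge can be made concrete with a small simulation (an illustrative sketch, not taken from the paper; all variable names and coefficients are hypothetical). A confounder Z drives both a treatment X and an outcome Y, so the raw association between X and Y overstates the true causal effect; adjusting for Z recovers it.

```python
import numpy as np

# Hypothetical data-generating process: Z confounds X and Y.
# The true causal effect of X on Y is 2.0.
rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)                        # confounder
x = 1.5 * z + rng.normal(size=n)              # treatment, partly driven by Z
y = 2.0 * x + 3.0 * z + rng.normal(size=n)    # outcome

# Naive association: regress Y on X alone.
# Biased by the backdoor path X <- Z -> Y.
X_naive = np.column_stack([x, np.ones(n)])
beta_naive = np.linalg.lstsq(X_naive, y, rcond=None)[0][0]

# Backdoor adjustment: include Z as a covariate.
# Recovers a coefficient close to the true effect, 2.0.
X_adj = np.column_stack([x, z, np.ones(n)])
beta_adj = np.linalg.lstsq(X_adj, y, rcond=None)[0][0]

print(f"naive estimate:    {beta_naive:.2f}")
print(f"adjusted estimate: {beta_adj:.2f}")
```

The naive estimate lands near 3.4 rather than 2.0, which is the kind of bias the paper argues can be removed when models are expressive enough to represent the relevant structure.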

Published

2021-05-18

How to Cite

Jensen, D. D. (2021). Improving Causal Inference by Increasing Model Expressiveness. Proceedings of the AAAI Conference on Artificial Intelligence, 35(17), 15053-15057. https://doi.org/10.1609/aaai.v35i17.17767

Section

Senior Member Presentation: Blue Sky Papers