Improving Causal Inference by Increasing Model Expressiveness


  • David D. Jensen, University of Massachusetts Amherst


Keywords: Machine Learning, Causal Inference, Knowledge Representation


The ability to learn and reason with causal knowledge is a key aspect of intelligent behavior. In contrast to mere statistical association, knowledge of causation enables reasoning about the effects of actions. Causal reasoning is vital for autonomous agents and for a range of applications in science, medicine, business, and government. However, current methods for causal inference are hobbled because they use relatively inexpressive models. Surprisingly, current causal models eschew nearly every major representational innovation common in a range of other fields both inside and outside of computer science, including representation of objects, relationships, time, space, and hierarchy. Even more surprisingly, a range of recent research provides strong evidence that more expressive representations make possible causal inferences that are otherwise impossible and remove key biases that would otherwise afflict more naive inferences. New research on causal inference should target increases in expressiveness to improve accuracy and effectiveness.
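The abstract's core distinction between statistical association and causal knowledge can be illustrated with a small simulation. The following sketch is a hypothetical example (not from the paper): a three-variable structural causal model with a confounder, showing that the observational contrast E[Y | T=1] − E[Y | T=0] is biased, while the interventional contrast E[Y | do(T=1)] − E[Y | do(T=0)] recovers the true effect of the action.

```python
import random

# Hypothetical structural causal model (illustrative only):
#   Confounder C -> Treatment T, Confounder C -> Outcome Y, Treatment T -> Y.
# The true causal effect of T on Y is 1.0.

def sample(do_t=None, n=100_000, seed=0):
    """Draw (t, y) pairs; do_t, if set, intervenes on T (severs C -> T)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        c = rng.random() < 0.5                                  # confounder
        t = do_t if do_t is not None else rng.random() < (0.8 if c else 0.2)
        y = 1.0 * t + 2.0 * c + rng.gauss(0, 0.1)               # structural eq.
        data.append((t, y))
    return data

# Observational association: biased upward by the confounder C.
obs = sample()
mean_y1 = sum(y for t, y in obs if t) / sum(1 for t, _ in obs if t)
mean_y0 = sum(y for t, y in obs if not t) / sum(1 for t, _ in obs if not t)
obs_diff = mean_y1 - mean_y0      # ~2.2, not the causal effect

# Interventional contrast: setting T by fiat removes the C -> T dependence,
# so the difference of means recovers the true effect (~1.0).
do_diff = (sum(y for _, y in sample(do_t=True)) / 100_000
           - sum(y for _, y in sample(do_t=False)) / 100_000)
```

Mere association here overstates the effect of the action by more than a factor of two; only the interventional quantity answers "what happens if we act?" This propositional example also hints at the paper's larger point: real domains add objects, relationships, and time that this flat representation cannot express.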




How to Cite

Jensen, D. D. (2021). Improving Causal Inference by Increasing Model Expressiveness. Proceedings of the AAAI Conference on Artificial Intelligence, 35(17), 15053-15057.



Senior Member Presentation: Blue Sky Papers