PYLON: A PyTorch Framework for Learning with Constraints
Keywords: Neuro-symbolic Reasoning, Constraints, Structured Prediction, Probabilistic Reasoning, Logic
Abstract
Deep learning excels at learning task information from large amounts of data, but struggles with learning from declarative high-level knowledge that can be more succinctly expressed directly. In this work, we introduce PYLON, a neuro-symbolic training framework that builds on PyTorch to augment procedurally trained models with declaratively specified knowledge. PYLON lets users programmatically specify constraints as Python functions and compiles them into a differentiable loss, thus training predictive models that fit the data whilst satisfying the specified constraints. PYLON includes both exact and approximate compilers to efficiently compute the loss, employing fuzzy logic, sampling methods, and circuits, ensuring scalability even to complex models and constraints. Crucially, a guiding principle in designing PYLON is the ease with which any existing deep learning codebase can be extended to learn from constraints in a few lines of code: a function that expresses the constraint, and a single line to compile it into a loss. Our demo comprises models in NLP, computer vision, logical games, and knowledge graphs that can be interactively trained using constraints as supervision.
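To make the constraint-to-loss idea concrete, here is a minimal pure-Python sketch of the underlying semantics: a constraint written as an ordinary Python predicate, compiled into a loss by exactly enumerating the probability mass of satisfying assignments. The function names (`exactly_one`, `constraint_loss`) are hypothetical illustrations, not PYLON's actual API, and the brute-force enumeration stands in for PYLON's more scalable exact and approximate compilers.

```python
import itertools
import math

def exactly_one(assignment):
    # Constraint expressed as a plain Python function over a boolean
    # assignment: exactly one of the labels may be true.
    return sum(assignment) == 1

def constraint_loss(probs, constraint):
    # Exact compilation by enumeration: sum the probability mass of every
    # assignment that satisfies the constraint, then take the negative log.
    # (PYLON's circuit, fuzzy-logic, and sampling compilers avoid this
    # exponential enumeration; this sketch only illustrates the semantics.)
    sat_mass = 0.0
    for assignment in itertools.product([False, True], repeat=len(probs)):
        if constraint(assignment):
            mass = 1.0
            for p, a in zip(probs, assignment):
                mass *= p if a else (1.0 - p)
            sat_mass += mass
    return -math.log(sat_mass)

# Model marginals over three labels; the loss shrinks as the model's
# predictions concentrate on constraint-satisfying assignments.
loss = constraint_loss([0.9, 0.2, 0.1], exactly_one)
```

In a real setup the probabilities would be differentiable PyTorch tensors, so minimizing this loss by gradient descent pushes the model toward predictions that satisfy the constraint, alongside the usual supervised loss.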
How to Cite
Ahmed, K., Li, T., Ton, T., Guo, Q., Chang, K.-W., Kordjamshidi, P., Srikumar, V., Van den Broeck, G., & Singh, S. (2022). PYLON: A PyTorch Framework for Learning with Constraints. Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 13152-13154. https://doi.org/10.1609/aaai.v36i11.21711
AAAI Demonstration Track