Self-Supervised Self-Supervision by Combining Deep Learning and Probabilistic Logic
DOI:
https://doi.org/10.1609/aaai.v35i6.16631
Keywords:
Neuro-Symbolic AI (NSAI), Unsupervised & Self-Supervised Learning, Probabilistic Graphical Models
Abstract
Labeling training examples at scale is a perennial challenge in machine learning. Self-supervision methods compensate for the lack of direct supervision by leveraging prior knowledge to automatically generate noisy labeled examples. Deep probabilistic logic (DPL) is a unifying framework for self-supervised learning that represents unknown labels as latent variables and incorporates diverse self-supervision using probabilistic logic to train a deep neural network end-to-end using variational EM. While DPL is successful at combining pre-specified self-supervision, manually crafting self-supervision to attain high accuracy may still be tedious and challenging. In this paper, we propose Self-Supervised Self-Supervision (S4), which adds to DPL the capability to learn new self-supervision automatically. Starting from an initial "seed," S4 iteratively uses the deep neural network to propose new self-supervision. These are either added directly (a form of structured self-training) or verified by a human expert (as in feature-based active learning). Experiments show that S4 is able to automatically propose accurate self-supervision and can often nearly match the accuracy of supervised methods with a tiny fraction of the human effort.
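The loop the abstract describes — start from seed self-supervision, train a model on the resulting noisy labels, then let the model propose new self-supervision to add directly (structured self-training) — can be illustrated with a toy sketch. Everything below (the keyword rules, the toy documents, the frequency-based "model," the promotion heuristic) is an illustrative assumption of this note, not the authors' DPL/S4 implementation:

```python
# Toy sketch of an S4-style loop: seed labeling rules -> noisy labels ->
# train a trivial model -> promote confident new rules -> repeat.
# All data, rule names, and thresholds are illustrative assumptions.
from collections import Counter

docs = [
    ("great movie loved it", 1),
    ("loved the acting great fun", 1),
    ("terrible plot hated it", 0),
    ("hated every terrible minute", 0),
    ("boring and terrible", 0),
    ("great soundtrack loved the cast", 1),
]

seed_rules = {"great": 1, "terrible": 0}  # initial "seed" self-supervision

def weak_label(text, rules):
    """Noisy label from the current rules: majority vote, None if no rule fires."""
    votes = [lab for word, lab in rules.items() if word in text.split()]
    return Counter(votes).most_common(1)[0][0] if votes else None

def train(rules):
    """'Train' a trivial model: map each word to its majority weak label."""
    word_votes = {}
    for text, _ in docs:
        y = weak_label(text, rules)
        if y is None:
            continue
        for word in text.split():
            word_votes.setdefault(word, []).append(y)
    return {w: Counter(v).most_common(1)[0][0] for w, v in word_votes.items()}

def propose(model, rules, min_count=2):
    """Structured self-training: promote frequent, consistently labeled words."""
    counts = Counter(w for text, _ in docs for w in text.split())
    stopwords = {"it", "the", "and"}
    return {w: y for w, y in model.items()
            if w not in rules and w not in stopwords and counts[w] >= min_count}

rules = dict(seed_rules)
for _ in range(2):                  # a couple of S4-style iterations
    model = train(rules)
    rules.update(propose(model, rules))  # added directly, no human verification

labels = [weak_label(text, rules) for text, _ in docs]
```

In the paper's setting the proposals can instead be routed to a human expert for verification (feature-based active learning); here they are accepted automatically once they clear the frequency threshold, which is the riskier but cheaper variant.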
Published
2021-05-18
How to Cite
Lang, H., & Poon, H. (2021). Self-Supervised Self-Supervision by Combining Deep Learning and Probabilistic Logic. Proceedings of the AAAI Conference on Artificial Intelligence, 35(6), 4978-4986. https://doi.org/10.1609/aaai.v35i6.16631
Section
AAAI Technical Track Focus Area on Neuro-Symbolic AI