Fusing Domain-Specific Content from Large Language Models into Knowledge Graphs for Enhanced Zero Shot Object State Classification
DOI: https://doi.org/10.1609/aaaiss.v3i1.31190
Keywords: Large Language Models, Zero-shot Learning, Object State Classification, Embeddings Fusion, Zero-shot Prompting
Abstract
Domain-specific knowledge can significantly contribute to addressing a wide variety of vision tasks. However, the generation of such knowledge entails considerable human labor and time costs. This study investigates the potential of Large Language Models (LLMs) in generating and providing domain-specific information through semantic embeddings. To achieve this, an LLM is integrated into a pipeline that utilizes Knowledge Graphs and pre-trained semantic vectors in the context of the Vision-based Zero-shot Object State Classification task. We thoroughly examine the behavior of the LLM through an extensive ablation study. Our findings reveal that the integration of LLM-based embeddings, in combination with general-purpose pre-trained embeddings, leads to substantial performance improvements. Drawing insights from this ablation study, we conduct a comparative analysis against competing models, thereby highlighting the state-of-the-art performance achieved by the proposed approach.
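As a rough illustration of the embeddings-fusion idea described in the abstract, the sketch below concatenates an LLM-derived class embedding with a general-purpose pre-trained embedding and assigns an unseen object-state label by cosine similarity. All function names, dimensions, and vectors here are hypothetical placeholders for illustration only, not the paper's actual pipeline or models.

```python
import numpy as np

def l2_normalize(v: np.ndarray) -> np.ndarray:
    """Scale a vector to unit length so cosine similarity reduces to a dot product."""
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def fuse_embeddings(llm_emb: np.ndarray, pretrained_emb: np.ndarray) -> np.ndarray:
    """One simple fusion strategy: normalize each embedding and concatenate them."""
    return np.concatenate([l2_normalize(llm_emb), l2_normalize(pretrained_emb)])

def zero_shot_classify(image_emb: np.ndarray, class_embs: dict[str, np.ndarray]) -> str:
    """Return the state label whose fused embedding is most similar (cosine) to the query."""
    image_emb = l2_normalize(image_emb)
    return max(class_embs, key=lambda label: float(image_emb @ l2_normalize(class_embs[label])))

# Toy usage with random vectors standing in for real LLM, pre-trained, and visual embeddings.
rng = np.random.default_rng(0)
states = ["open", "closed", "folded"]
fused = {s: fuse_embeddings(rng.normal(size=768), rng.normal(size=300)) for s in states}
query = rng.normal(size=768 + 300)  # placeholder for an image embedding projected into the fused space
print(zero_shot_classify(query, fused))
```

In this toy setup the fusion is plain concatenation of normalized vectors; the paper instead evaluates fusion within a Knowledge Graph-based pipeline, so this snippet should be read only as a conceptual aid.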
Published
2024-05-20
How to Cite
Gouidis, F., Papantoniou, K., Papoutsakis, K., Patkos, T., Argyros, A., & Plexousakis, D. (2024). Fusing Domain-Specific Content from Large Language Models into Knowledge Graphs for Enhanced Zero Shot Object State Classification. Proceedings of the AAAI Symposium Series, 3(1), 115-124. https://doi.org/10.1609/aaaiss.v3i1.31190
Issue
Proceedings of the AAAI Symposium Series, Vol. 3 No. 1 (2024)
Section
Empowering Machine Learning and Large Language Models with Domain and Commonsense Knowledge