Improving Ontology Requirements Engineering with OntoChat and Participatory Prompting
DOI:
https://doi.org/10.1609/aaaiss.v4i1.31799
Abstract
Past ontology requirements engineering (ORE) has primarily relied on manual methods, such as interviews and collaborative forums, to gather user requirements from domain experts, especially in large projects. OntoChat offers a framework for ORE that uses large language models (LLMs) to streamline the process through four key functions: user story creation, competency question (CQ) extraction, CQ filtration and analysis, and ontology testing support. In OntoChat, users are expected to prompt the chatbot to generate user stories, but preliminary evaluations revealed that they struggle to do this effectively. To address this issue, we experimented with a research method called participatory prompting, in which researcher-mediated interactions help users without deep knowledge of LLMs use the chatbot more effectively. The participatory prompting user study produced pre-defined prompt templates based on user queries, focusing on creating and refining the personas, goals, scenarios, sample data, and data resources of a user story. These refined user stories are subsequently converted into CQs.
Published
2024-11-08
How to Cite
Zhao, Y., Zhang, B., Hu, X., Ouyang, S., Kim, J., Jain, N., de Berardinis, J., Meroño-Peñuela, A., & Simperl, E. (2024). Improving Ontology Requirements Engineering with OntoChat and Participatory Prompting. Proceedings of the AAAI Symposium Series, 4(1), 253-257. https://doi.org/10.1609/aaaiss.v4i1.31799
Section
Large Language Models for Knowledge Graph and Ontology Engineering - Short Papers