Supplementary materials for the paper:
Navigating Ontology Development with Large Language Models @ ESWC 2024. See the PDF
The video presentation of the work is available here, along with the PowerPoint slides.
This repository contains documentation and implementations related to the paper "Navigating Ontology Development with Large Language Models." The paper explores various prompting techniques and models for ontology development, focusing on creating a tool that helps ontologists by providing modelling suggestions.
As a comparison baseline, we compared the LLM outputs with students' submissions for the same ontology stories; click here to see the results. There were 10 groups of students (~2 students per group), with three submissions per story to pass the task (the stories were part of a master's course). The LLM outputs for each prompting technique are stored in the "LLM_OWL_outputs" folder. The following prompting techniques were evaluated (a minimal sketch contrasting two of them follows the list):
- Zero-Shot Prompting
- Sub-task Decomposed Prompting: Waterfall approach
- Sub-task Decomposed Prompting: Competency Question by Competency Question (CQbyCQ)
- Chain of Thought (CoT)
- Self-Consistency with Chain of Thought (CoT-SC)
- Graph of Thoughts (GoT)
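To make the single-prompt versus decomposed settings concrete, below is a minimal, hypothetical Python sketch (not the code used in the paper) contrasting Zero-Shot prompting with the CQbyCQ decomposition. The `call_llm` helper, the prompt wording, and the Turtle output format are illustrative assumptions; plug in whichever LLM client and serialization you actually use.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical helper: send `prompt` to the LLM of your choice and return its reply."""
    raise NotImplementedError("Plug in your preferred LLM client here.")


def zero_shot(story: str) -> str:
    """Zero-Shot: ask for a complete OWL model of the story in a single prompt."""
    prompt = (
        "You are an ontology engineer. Model the following story as an OWL ontology "
        "in Turtle syntax, covering the relevant classes, properties, and restrictions.\n\n"
        f"Story:\n{story}"
    )
    return call_llm(prompt)


def cq_by_cq(story: str, competency_questions: list[str]) -> list[str]:
    """CQbyCQ: prompt once per competency question, collecting partial models."""
    fragments = []
    for cq in competency_questions:
        prompt = (
            "You are an ontology engineer. Given the story below, produce only the OWL "
            "axioms (in Turtle syntax) needed to answer this competency question.\n\n"
            f"Story:\n{story}\n\nCompetency question: {cq}"
        )
        fragments.append(call_llm(prompt))
    return fragments
```

In the CQbyCQ setting, the per-question fragments would still need to be merged and checked for consistency before evaluation.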
Here is the list of models used in the experiments. Click here to see more details.
- Open-Source Models
- Llama-7B
- Llama-13B
- Llama2-70B
- Alpaca
- Falcon-7B-Instruct
- WizardLM
- Alpaca-LoRA
- Closed-Source Models
- Bard
- GPT-3.5
- GPT-4
- Initial Experiment: First Phase - Simple binary criteria to exclude models
- Initial Experiment: Second Phase - Excluding more models and prompting techniques
Here is the list of stories used in the experiments. Click here to see the stories and their competency questions (CQs).
- Short Story 1: Vegetarians
- Short Story 2: Lena
- Short Story 3: Big Festival
- Story 1: Theatre Festival
- Story 2: Music Production
- Story 3: Hospital Story
We included the results of the students' course submissions in the story section. However, due to university regulations, the OWL files of the students' submissions are not available. More information about the statistics and their performance can be found here.