Benchmarking Ontology Modularity and QA Using Real-world Ontologies
Supervisor: Jieying Chen (j.chen2@vu.nl)
Abstract
Modularity within ontologies, i.e., creating smaller, purpose-specific ontology modules, enhances their maintainability, usability, and integration. At the same time, the use of Question-Answering (QA) systems to interact with these ontologies has surged, necessitating benchmarks to evaluate their performance. Incorporating real-world data into such benchmarks is crucial for understanding the practical utility and scalability of ontology modularity and the associated QA systems.
Objectives
- Undertake a comprehensive study of existing benchmarks in ontology modularity and QA, identifying gaps, especially in real-world applicability.
- Aggregate a diverse set of real-world data from varied domains, ensuring it encompasses a broad spectrum of ontology structures and knowledge representation challenges.
- Design and formulate benchmarks that integrate the collected real-world data to evaluate ontology modularity against criteria such as cohesion, coupling, and granularity (see the metric sketch after this list).
- Develop new QA systems or incorporate existing ones, and test them against the created benchmarks, focusing on the accuracy, relevance, and comprehensiveness of answers (see the evaluation sketch after this list).
- Engage domain experts to validate the quality and relevance of benchmarks, ensuring they truly reflect real-world challenges and applications.
- Position the newly created benchmarks against established ones through comparative analysis, determining their relative strengths, weaknesses, and utility in evaluating ontology modularity and QA systems.
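To make the modularity criteria concrete, the following minimal metric sketch scores a given partition of an ontology's entities with simple graph-based cohesion and coupling measures. The metric definitions, the rdflib-based loading, the file name ontology.owl, and the toy module assignment are assumptions for illustration only; the project would likely substitute established measures from the modularity literature.

```python
"""Minimal sketch: graph-based cohesion/coupling for ontology modules.

Assumptions: modules are given as sets of entity IRIs, and every
IRI-to-IRI triple counts as one edge. Neither is prescribed by the project.
"""
from rdflib import Graph, Namespace, URIRef

def module_metrics(graph, modules):
    """Return {module name: (cohesion, coupling)} for each module.

    cohesion: fraction of a module's incident edges that stay inside it.
    coupling: number of incident edges whose other endpoint is owned by
    a different module (endpoints outside every module are ignored).
    """
    # Keep only edges between named entities; literals carry no module link.
    edges = [(s, o) for s, _, o in graph
             if isinstance(s, URIRef) and isinstance(o, URIRef)]
    owner = {e: name for name, ents in modules.items() for e in ents}
    scores = {}
    for name, ents in modules.items():
        incident = [(s, o) for s, o in edges if s in ents or o in ents]
        internal = [e for e in incident if e[0] in ents and e[1] in ents]
        cross = [(s, o) for s, o in incident
                 if owner.get(s, name) != name or owner.get(o, name) != name]
        cohesion = len(internal) / len(incident) if incident else 0.0
        scores[name] = (cohesion, len(cross))
    return scores

g = Graph()
g.parse("ontology.owl")  # hypothetical input file
EX = Namespace("http://example.org/")  # toy module assignment for illustration
modules = {"anatomy": {EX.Heart, EX.Valve}, "disease": {EX.Stenosis}}
print(module_metrics(g, modules))
```

Granularity could be read off the same partition, e.g. as the size distribution of the modules relative to the whole ontology.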
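In the same spirit, the QA objective can be grounded with a small evaluation sketch that scores a system's answer sets against gold answers. The JSON benchmark layout and the answer(question) callable below are hypothetical stand-ins for whatever interface the chosen QA systems expose.

```python
"""Minimal sketch: set-based precision/recall/F1 over a QA benchmark.

Assumed benchmark format: a JSON list of {"question": str, "answers": [str]}.
"""
import json

def score(pred, gold):
    """Set-overlap precision, recall, and F1 for one question."""
    if not pred and not gold:
        return 1.0, 1.0, 1.0  # both empty counts as a perfect abstention
    tp = len(pred & gold)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def evaluate(benchmark_path, answer):
    """Average the per-question scores; assumes a non-empty benchmark."""
    with open(benchmark_path) as f:
        items = json.load(f)
    totals = [score(set(answer(it["question"])), set(it["answers"]))
              for it in items]
    n = len(totals)
    return {"precision": sum(t[0] for t in totals) / n,
            "recall": sum(t[1] for t in totals) / n,
            "f1": sum(t[2] for t in totals) / n}
```

Set-based F1 suits the list-valued answers typical of ontology QA; relevance and comprehensiveness would need complementary judgments, e.g. the expert ratings collected in the validation step.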
References
- Jieying Chen, Michel Ludwig, Yue Ma, Dirk Walther: Zooming in on Ontologies: Minimal Modules and Best Excerpts. ISWC (1) 2017: 173-189.
- Yujiao Zhou, Yavor Nenov, Bernardo Cuenca Grau, Ian Horrocks: Pay-as-you-go OWL Query Answering Using a Triple Store. AAAI 2014.