Benchmarking Ontology Modularity and QA Using Real-world Ontologies

Supervisor: Jieying Chen


Modularity within ontologies—creating smaller, purpose-specific ontology modules—enhances their maintainability, usability, and integration. At the same time, the use of Question-Answering (QA) systems to interact with these ontologies has surged, making benchmarks for evaluating their performance essential. Incorporating real-world data into such benchmarks is crucial for understanding the practical utility and scalability of ontology modularity and the QA systems built on top of it.


  1. Undertake a comprehensive study of existing benchmarks in ontology modularity and QA, identifying gaps, especially in terms of real-world applicability.
  2. Aggregate a diverse set of real-world data from varied domains, ensuring it encompasses a broad spectrum of ontology structures and knowledge representation challenges.
  3. Design and formulate benchmarks that integrate the collected real-world data to evaluate ontology modularity in terms of cohesion, coupling, granularity, and more.
  4. Develop new QA systems or adapt existing ones, and test them against the created benchmarks, focusing on the accuracy, relevance, and comprehensiveness of answers.
  5. Engage domain experts to validate the quality and relevance of benchmarks, ensuring they truly reflect real-world challenges and applications.
  6. Position the newly created benchmarks against established ones through a comparative analysis, determining their relative strengths, weaknesses, and utility in evaluating ontology modularity and QA systems.
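To make objective 3 concrete, the sketch below shows one way cohesion, coupling, and granularity could be computed once a benchmark partitions an ontology into modules. The data representation is a simplifying assumption for illustration: modules are sets of term names, and dependencies are term pairs standing in for axiom-level references; a real benchmark would extract these from OWL axioms.

```python
from collections import defaultdict

def module_metrics(modules, dependencies):
    """Compute illustrative cohesion, coupling, and granularity scores
    for an ontology partitioned into modules.

    modules: dict mapping module name -> set of term names
    dependencies: iterable of (term_a, term_b) pairs, each meaning that
        term_a's definition references term_b (e.g. via an axiom)

    NOTE: this is a hypothetical formulation for illustration, not a
    standard metric definition from the literature.
    """
    # Map each term to the module that owns it.
    owner = {t: m for m, terms in modules.items() for t in terms}
    intra = defaultdict(int)  # dependencies staying inside a module
    inter = defaultdict(int)  # dependencies crossing module boundaries
    for a, b in dependencies:
        if owner[a] == owner[b]:
            intra[owner[a]] += 1
        else:
            inter[owner[a]] += 1

    total_terms = sum(len(ts) for ts in modules.values())
    metrics = {}
    for m, terms in modules.items():
        edges = intra[m] + inter[m]
        metrics[m] = {
            # cohesion: share of a module's dependencies that stay internal
            "cohesion": intra[m] / edges if edges else 1.0,
            # coupling: share of its dependencies that leave the module
            "coupling": inter[m] / edges if edges else 0.0,
            # granularity: module size relative to the whole ontology
            "granularity": len(terms) / total_terms,
        }
    return metrics

# Toy example with two hypothetical modules:
m = module_metrics(
    {"anatomy": {"Heart", "Valve"}, "disease": {"Stenosis"}},
    [("Valve", "Heart"), ("Stenosis", "Valve")],
)
```

In the toy example, the `anatomy` module is fully cohesive (its one dependency is internal), while `disease` depends entirely on terms outside itself, which the coupling score reflects.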
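For objective 4, answer accuracy is often scored with exact-match and token-level F1, as popularized by reading-comprehension benchmarks. A minimal sketch of such scorers, assuming free-text gold answers (the normalization rules here are one common choice, not a prescription for the benchmark):

```python
import re
from collections import Counter

def normalize(text):
    """Lowercase, drop English articles, and strip punctuation."""
    text = text.lower()
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    text = re.sub(r"[^\w\s]", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(gold))

def token_f1(prediction, gold):
    """Harmonic mean of token precision and recall after normalization."""
    p = normalize(prediction).split()
    g = normalize(gold).split()
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(p)
    recall = overlap / len(g)
    return 2 * precision * recall / (precision + recall)
```

Relevance and comprehensiveness are harder to reduce to string overlap; for those, the benchmark would likely pair such automatic scores with the expert judgments gathered under objective 5.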