Designing a Benchmark and Auto-Evaluator for Extracting Relevant Axioms from User Input

Supervisor: Jieying Chen (j.chen2@vu.nl)

Abstract

Ontology extraction is a cornerstone of ontology construction, semantic reasoning, and knowledge-base enrichment. With the growing volume of user-contributed content and the wide variation in how knowledge is represented, there is an increasing need for tools that can extract axioms from user input and validate their relevance (for example, the sentence "every dog is an animal" corresponds to the OWL axiom SubClassOf(Dog, Animal)).

However, creating a ground-truth dataset through human annotation is both resource-intensive and time-consuming. An automatic evaluator therefore not only streamlines the assessment of extraction results but also addresses the challenges of scalability and efficiency. This master's thesis proposes the development of a benchmark dataset together with an auto-evaluator to meet these challenges.

Objectives

  1. Study current methodologies and tools focused on ontology extraction, relevancy determination, and automated evaluation to comprehend the existing landscape and identify gaps.
  2. Create a diverse dataset encompassing various domains, serving as a gold standard for evaluating axiom extraction techniques.
  3. Develop an automated evaluation tool that can assess the relevancy and accuracy of axioms extracted from user input, comparing them against the benchmark dataset.
  4. Define and implement metrics that capture the precision, recall, F1 score, and relevance of the extracted axioms, providing a holistic assessment (a minimal sketch of such a set-based comparison is given after this list).
  5. Employ the auto-evaluator on diverse user inputs, comparing its assessments with manual evaluations to measure its efficacy and reliability.
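
As a first approximation, the metrics in objective 4 can be computed by comparing the set of extracted axioms against the gold-standard set. The sketch below is illustrative only: it assumes axioms are serialised as strings (e.g. in OWL functional syntax) and counts only exact syntactic matches as correct; the names `evaluate_axioms` and `normalise` are placeholders, not part of any existing library. A full evaluator would likely also need to recognise semantically equivalent but syntactically different axioms, for instance with the help of a reasoner.

```python
# Minimal sketch of axiom-level evaluation against a gold standard.
# Assumption: axioms are available as strings in a comparable serialisation.

def normalise(axiom: str) -> str:
    """Normalise whitespace and case so syntactically equal axioms compare equal."""
    return " ".join(axiom.split()).lower()

def evaluate_axioms(extracted: list[str], gold: list[str]) -> dict[str, float]:
    """Compute precision, recall, and F1 of extracted axioms against a gold set."""
    extracted_set = {normalise(a) for a in extracted}
    gold_set = {normalise(a) for a in gold}

    true_positives = len(extracted_set & gold_set)
    precision = true_positives / len(extracted_set) if extracted_set else 0.0
    recall = true_positives / len(gold_set) if gold_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0

    return {"precision": precision, "recall": recall, "f1": f1}

if __name__ == "__main__":
    gold = ["SubClassOf(:Dog :Animal)", "SubClassOf(:Cat :Animal)"]
    extracted = ["SubClassOf(:Dog :Animal)", "SubClassOf(:Dog :Pet)"]
    print(evaluate_axioms(extracted, gold))
    # -> {'precision': 0.5, 'recall': 0.5, 'f1': 0.5}
```

A relevance score would sit on top of this, e.g. by restricting the gold set to axioms judged relevant to the given user input before computing the same measures.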
