Enhancing Language Models with Ontology Subsumption Inference

Supervisor: Jieying Chen (j.chen2@vu.nl)

Abstract

The aim of this master's thesis is to further investigate the potential of pre-trained language models (LMs) to encode and reason with ontology subsumptions. Existing work on LMs as knowledge bases has focused primarily on simple, triple-based relational knowledge bases, neglecting more expressive, concept-oriented knowledge bases such as OWL ontologies. This thesis explores how LMs can be enhanced to better understand and infer subsumption relationships between ontology concepts, i.e., axioms of the form C SubClassOf D.
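
As a concrete illustration (not the thesis method), one common probing setup casts a candidate atomic subsumption as a cloze query against a masked LM. In the hedged sketch below, the model choice, the prompt template, and the helper probe_subsumption are all illustrative assumptions; it only handles concept labels that the model tokenizes as a single token.

    # Hedged sketch: probing a masked LM for atomic subsumptions.
    # Model, prompt template, and helper name are illustrative assumptions.
    from transformers import pipeline

    fill = pipeline("fill-mask", model="bert-base-uncased")

    def probe_subsumption(sub_label: str, sup_label: str, top_k: int = 20) -> bool:
        """True if the LM ranks sup_label among its top-k guesses for the mask."""
        predictions = fill(f"A {sub_label} is a kind of [MASK].", top_k=top_k)
        return any(p["token_str"].strip() == sup_label for p in predictions)

    print(probe_subsumption("trout", "fish"))     # plausibly True
    print(probe_subsumption("trout", "vehicle"))  # plausibly False

A fine-tuned baseline would instead train a sequence-pair classifier on axioms extracted from a reference ontology; the cloze probe above only tests what the pre-trained model already encodes.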

Objectives

  1. Study and analyze the existing literature on LMs as knowledge bases and on ontology subsumption inference.
  2. Understand and reproduce an existing framework to establish baseline performance.
  3. Explore and propose novel techniques that leverage LMs for improved ontology subsumption inference, covering both atomic and complex concepts (a verbalization sketch for complex concepts follows this list).
  4. Evaluate and compare the performance of the enhanced LMs against the baseline through extensive experiments on ontologies of varying domains and scales.
  5. Analyze and interpret the results to gain insight into the encoding and reasoning capabilities of LMs with respect to ontology subsumptions.
  6. Discuss the implications, limitations, and directions for future research on LMs and ontology-based knowledge representation.
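
Objective 3 distinguishes atomic concepts from complex class expressions, which must be verbalized before an LM can consume them. The sketch below is one hypothetical, hand-rolled verbalizer over a toy concept representation covering conjunction and existential restriction; the dataclasses and templates are assumptions for illustration, not part of the proposal.

    # Hedged sketch: verbalizing complex OWL-style class expressions for LM input.
    # The concept representation and templates are illustrative assumptions.
    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Atomic:
        label: str

    @dataclass
    class Some:              # existential restriction, e.g. hasTopping some Mushroom
        role: str
        filler: "Concept"

    @dataclass
    class And:               # conjunction of two class expressions
        left: "Concept"
        right: "Concept"

    Concept = Union[Atomic, Some, And]

    def verbalize(c: Concept) -> str:
        """Render a class expression as the phrase fed to the LM."""
        if isinstance(c, Atomic):
            return c.label
        if isinstance(c, Some):
            return f"something that {c.role} some {verbalize(c.filler)}"
        if isinstance(c, And):
            return f"{verbalize(c.left)} and {verbalize(c.right)}"
        raise TypeError(f"unsupported concept: {c!r}")

    # Pizza and (hasTopping some Mushroom)
    expr = And(Atomic("pizza"), Some("has topping", Atomic("mushroom")))
    print(verbalize(expr))  # "pizza and something that has topping some mushroom"

Pairs of such verbalizations (candidate subclass, candidate superclass) can then be scored by a sequence-pair classifier or by prompting, which is where the proposed techniques would plug in.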
