Supervisor: Jieying Chen (j.chen2@vu.nl)
Large Language Models (LLMs) excel at many NLP tasks but struggle with reasoning and explainability, which are critical in domains such as the biomedical and geological sciences. Ontologies, known for their reasoning capabilities, remain essential, but aligning unstructured text with ontological axioms is complex and labor-intensive. This project tackles the problem of ontology text alignment by integrating BERT models and generative LLMs into a Retrieval-Augmented Generation (RAG) framework. With semantic enhancement via atomic decomposition, we aim to streamline the alignment process. Initial benchmarks in geology and biomedicine show promising results.
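To illustrate the intended retrieve-then-generate pipeline, the sketch below shows a minimal alignment step: verbalised ontology axioms are embedded with a BERT-based encoder, the most similar axioms are retrieved for a text passage, and a prompt for a generative LLM is assembled. This is a hedged sketch, not the project's implementation; the encoder model name, the example axioms, the passage, and the prompt wording are illustrative assumptions, and the LLM call itself is left as a placeholder.

```python
# Minimal RAG-style retrieval sketch for ontology text alignment.
# Assumes the `sentence-transformers` library is installed; all data below is illustrative.
from sentence_transformers import SentenceTransformer, util

# Verbalised ontology axioms (hypothetical examples).
axioms = [
    "Every granite is an igneous rock.",
    "Every igneous rock is formed from cooled magma or lava.",
    "Every sedimentary rock is formed by the deposition of material.",
]

# Unstructured text passage to align against the ontology.
passage = "Granite forms when magma cools slowly beneath the Earth's surface."

# 1. Retrieval: encode axioms and the passage with a BERT-based encoder
#    and rank axioms by cosine similarity to the passage.
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder choice
axiom_emb = encoder.encode(axioms, convert_to_tensor=True)
passage_emb = encoder.encode(passage, convert_to_tensor=True)
scores = util.cos_sim(passage_emb, axiom_emb)[0]
top_k = scores.topk(k=2)
retrieved = [axioms[i] for i in top_k.indices.tolist()]

# 2. Generation: hand the retrieved axioms and the passage to a generative LLM,
#    which judges which axioms the passage expresses. The actual call depends
#    on the chosen LLM API and is left out here.
prompt = (
    "Given the ontology axioms:\n"
    + "\n".join(f"- {a}" for a in retrieved)
    + f"\n\nDoes the following text express any of these axioms?\nText: {passage}"
)
print(prompt)
```

In the full framework, the retrieval step would operate over atomically decomposed axioms rather than raw examples, and the generated answer would be evaluated against the geology and biomedicine benchmarks mentioned above.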
Develop and evaluate a RAG-based framework for ontology text alignment.
Expected outcome: a functional framework, and/or a benchmark and performance comparison with existing methods.
For more project descriptions, please check here: https://docs.google.com/document/d/1S8JdCk_Re0F189RaBjadVd8cEQwZ9sglwOMZoZioOQ0/edit?usp=sharing