Thesis title: Explaining Entailments from Ontologies

Supervisor: Patrick Koopmann

Ontologies are an important representation formalism for symbolic AI systems. Ontologies formulated in OWL allow the use of reasoning systems such as HermiT or ELK to infer implicit information from an ontology, or from a dataset combined with the ontology. Unlike inferences performed by sub-symbolic AI systems, decisions made by such a reasoner are in a sense “explainable by design”, because every inference can be explained solely on the basis of the information available in the ontology. In practice, however, the explanations that state-of-the-art systems provide to users are not always easy to understand. Recently, we have been developing new techniques for different types of explanations that are intended to make reasoning with ontologies truly explainable in practice. In this project, you will investigate alternative inference systems that can be used to create better explanations. In particular, you will develop a system of inference rules that can be used to produce more user-friendly explanations.
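To get a feeling for what “inferring implicit information with inference rules” means, here is a minimal Python sketch of rule-based entailment: it saturates a set of subclass axioms under the transitivity rule (from A ⊑ B and B ⊑ C, derive A ⊑ C). The class names and the single rule are purely illustrative assumptions, not the calculus you would develop in the project; real reasoners such as ELK use a larger rule set over richer axioms.

```python
def entailed_subclass_pairs(axioms):
    """Saturate a set of SubClassOf pairs under the transitivity rule:
    (A subClassOf B) and (B subClassOf C) entail (A subClassOf C).
    Applies the rule until no new pair can be derived (a fixpoint)."""
    derived = set(axioms)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(derived):
            for (b2, c) in list(derived):
                if b == b2 and (a, c) not in derived:
                    derived.add((a, c))
                    changed = True
    return derived

# Hypothetical toy ontology: Cat subClassOf Mammal, Mammal subClassOf Animal.
axioms = {("Cat", "Mammal"), ("Mammal", "Animal")}
print(("Cat", "Animal") in entailed_subclass_pairs(axioms))  # True
```

A rule application like this is exactly the kind of step a proof-based explanation can show a user: the entailment Cat ⊑ Animal is justified by the two axioms and one application of the transitivity rule.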

The supervisor will give an introduction to the topic (foundations of description logics, rule-based reasoning) at the beginning of the project. To get an idea, you can consult the following paper, which describes the current proof-based explanation services in practice:

Don’t be afraid to contact the supervisor if you would like more information on this project or would like to discuss it in more detail in person.