Bias in AI: Bias Detection and Mitigation in Large Language Models (Collaboration with UWV)

Supervisor: Jieying Chen (j.chen2@vu.nl)

Abstract:

The widespread use of Large Language Models (LLMs) in applications ranging from information retrieval to content creation has underscored the importance of their reliability and neutrality. While LLMs have proven to be valuable tools, their training on vast and varied datasets may inadvertently introduce or perpetuate biases present in the data. Ontologies, structured representations of knowledge with predefined relationships, offer a way to validate and rectify biased outputs by benchmarking generated answers against a standardized knowledge base.
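As a rough illustration of this validation idea, the sketch below checks a single model answer against the values a curated ontology asserts for a given subject and property. It is a minimal sketch only: the file `occupations.ttl`, the namespace, the entities `EX.Nurse` and `EX.suitableFor`, and the answer string are hypothetical placeholders, not part of the project deliverables.

```python
# Minimal sketch: flag an LLM answer that deviates from what a curated ontology asserts.
# The ontology file, namespace, and entities below are illustrative placeholders.
from rdflib import Graph, Namespace, URIRef

EX = Namespace("http://example.org/occupations#")

# Load the curated reference ontology (hypothetical file).
kb = Graph()
kb.parse("occupations.ttl", format="turtle")

def is_consistent_with_ontology(subject: URIRef, predicate: URIRef, llm_answer: str) -> bool:
    """True if the answer matches at least one value the ontology asserts for (subject, predicate)."""
    asserted = {str(obj).strip().lower() for obj in kb.objects(subject, predicate)}
    return llm_answer.strip().lower() in asserted

# Example probe: does the model's answer about who an occupation is suitable for
# deviate from the neutral values recorded in the ontology?
llm_answer = "women"  # placeholder for an actual model response
print(is_consistent_with_ontology(EX.Nurse, EX.suitableFor, llm_answer))
```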

Objectives

  1. Investigate existing methodologies for bias detection in LLMs, their limitations, and the potential use of ontologies as validators.
  2. Design and curate a comprehensive ontology capturing unbiased representations of various knowledge domains, emphasizing those particularly prone to bias.
  3. Develop a framework that utilizes the curated ontology to compare and contrast the outputs generated by LLMs, highlighting potential deviations that indicate bias.
  4. Design metrics to measure the degree and nature of bias in LLM outputs, providing a standardized way to assess and compare biases across different models (one possible metric is sketched after this list).
  5. Propose and implement strategies to rectify identified biases in LLMs, leveraging ontology as a guide for correct and neutral answers.
  6. Test the developed bias mitigation techniques using real-world scenarios and diverse datasets to assess their efficacy and robustness.
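For objective 4, one possible, purely illustrative metric is the rate at which model answers deviate from the reference ontology, compared across groups of probes (for example, occupations stereotypically associated with different genders); a large gap between groups would indicate bias relative to the ontology. The probe format, the `get_llm_answer` interface, and the `is_consistent` check (such as the helper in the earlier sketch) are assumptions, not a prescribed design.

```python
# Illustrative deviation-rate metric: per group of probes, the fraction of model
# answers that are not supported by the reference ontology. All names are placeholders.
from collections import defaultdict
from typing import Callable

def ontology_deviation_rates(
    probes: list[dict],
    get_llm_answer: Callable[[str], str],
    is_consistent: Callable[[object, object, str], bool],
) -> dict[str, float]:
    """Each probe is a dict with keys 'group', 'prompt', 'subject', 'predicate'."""
    totals: dict[str, int] = defaultdict(int)
    deviations: dict[str, int] = defaultdict(int)
    for probe in probes:
        answer = get_llm_answer(probe["prompt"])
        totals[probe["group"]] += 1
        if not is_consistent(probe["subject"], probe["predicate"], answer):
            deviations[probe["group"]] += 1
    # Deviation rate per group; disparities across groups signal potential bias.
    return {group: deviations[group] / totals[group] for group in totals}
```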

Other projects

For more project descriptions, please check here: https://docs.google.com/document/d/1S8JdCk_Re0F189RaBjadVd8cEQwZ9sglwOMZoZioOQ0/edit?usp=sharing