Reducing hallucinations in large vision-language models (LVLMs) is crucial for autonomous driving. A robust system to detect and mitigate these inaccuracies can significantly enhance model reliability.
* We are seeking a highly skilled researcher to design novel methods for detecting and localizing hallucinations in LVLM outputs for autonomous driving tasks.
* The successful candidate will investigate mitigation strategies that reduce hallucinations or improve the calibration of model confidence, drawing on their expertise in AI and machine learning.
* A key aspect of the project will be evaluating and benchmarking the hallucination detection system against state-of-the-art methods under diverse visual and textual conditions.
This PhD position is part of the Secure and Reliable Software Engineering and Decision-Making group at the University of Luxembourg, which focuses on developing innovative solutions for real-world applications.