Unlock the Future of Autonomous Driving with Expert Researchers
The Interdisciplinary Centre for Security, Reliability and Trust (SnT) at the University of Luxembourg is a leading international research and innovation centre in secure, reliable and trustworthy ICT systems and services.
Large vision-language models (LVLMs) can describe driving scenes and support decisions, but they sometimes hallucinate objects, relations, or events that are not present. In a safety-critical domain, reducing hallucinations and improving robustness and trustworthiness are essential.
This PhD project aims to develop novel methods to detect and mitigate hallucinations in video-based LVLMs for autonomous driving tasks. Your responsibilities will include:
* Evaluate and improve the performance of video-based LVLMs on autonomous driving tasks
* Analyze model behavior and propose strategies to enhance reliability and robustness under diverse driving conditions
* Benchmark the proposed methods against state-of-the-art approaches to hallucination detection (see the illustrative sketch below)
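To make the benchmarking task more concrete, the sketch below computes a simple CHAIR-style object-hallucination score: the fraction of objects mentioned in a model's scene descriptions that do not appear in the ground-truth annotations. All names here (the `OBJECT_VOCAB`, captions, and annotations) are hypothetical placeholders, and the naive substring matching stands in for the more careful object grounding a real evaluation pipeline would use.

```python
# A minimal, illustrative sketch of a CHAIR-style object-hallucination score:
# hallucinated object mentions divided by total object mentions. The
# vocabulary, captions, and annotations are hypothetical placeholders.

# Hypothetical closed vocabulary of driving-scene objects to scan captions for.
OBJECT_VOCAB = {
    "car", "truck", "bus", "pedestrian", "cyclist",
    "traffic light", "stop sign", "motorcycle",
}

def mentioned_objects(caption: str) -> set[str]:
    """Return the vocabulary objects mentioned in a generated caption."""
    text = caption.lower()
    return {obj for obj in OBJECT_VOCAB if obj in text}

def hallucination_rate(captions: list[str], ground_truth: list[set[str]]) -> float:
    """Fraction of mentioned objects absent from the ground-truth annotations."""
    total = hallucinated = 0
    for caption, gt_objects in zip(captions, ground_truth):
        for obj in mentioned_objects(caption):
            total += 1
            if obj not in gt_objects:
                hallucinated += 1
    return hallucinated / total if total else 0.0

# Toy example: the model mentions a pedestrian that is not annotated.
captions = ["A car waits at a traffic light while a pedestrian crosses."]
ground_truth = [{"car", "traffic light"}]
print(f"Hallucination rate: {hallucination_rate(captions, ground_truth):.2f}")  # 0.33
```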
In this role, you will develop novel solutions to reduce hallucinations and improve model trustworthiness, contributing directly to the development of safe and reliable autonomous driving systems.
We offer a unique opportunity to work on a cutting-edge project that has real-world implications and contributes to the advancement of artificial intelligence and machine learning.