Assuring AI in Autonomous Driving: Challenges and Emerging Approaches
This program is tentative and subject to change.
Artificial intelligence plays a critical role in autonomous driving, yet it introduces significant assurance challenges. Unlike traditional software, AI lacks explicit specifications and is applied to inherently uncertain, complex, and open problems such as road environment perception. Furthermore, modern AI systems are often opaque, making interpretability and verification difficult. These issues complicate the development of assurance methods that can ensure safety and reliability.
In this talk, I will review current approaches and industry standards for AI assurance in autonomous driving, including scenario-based testing and uncertainty estimation. I will discuss assurance methods for both hybrid systems—where AI functions as a component within a classical software stack—and end-to-end AI-driven systems. Finally, I will explore emerging directions for handling edge-case scenarios, including the use of foundation models and dual-processing architectures as potential solutions for enhancing the safety and robustness of end-to-end AI systems.
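To make the uncertainty-estimation idea mentioned above concrete, the sketch below shows one common technique, Monte Carlo dropout: a network keeps dropout active at inference time, and the spread of predictions across repeated stochastic passes serves as a rough epistemic-uncertainty signal. This is an illustrative example only, not the specific method covered in the talk; the tiny two-layer "perception" model and its random weights are hypothetical stand-ins for a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained perception model:
# one hidden layer with fixed random weights.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 1))

def stochastic_forward(x, drop_p=0.5):
    """One forward pass with dropout kept ON at inference time."""
    h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
    mask = rng.random(h.shape) > drop_p  # random dropout mask
    h = h * mask / (1.0 - drop_p)        # inverted-dropout scaling
    return h @ W2

def mc_dropout_predict(x, n_samples=200):
    """Monte Carlo dropout: average many stochastic passes.

    Returns the predictive mean and the standard deviation across
    passes; the latter is a rough epistemic-uncertainty estimate.
    """
    preds = np.stack([stochastic_forward(x) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

x = rng.normal(size=(1, 4))  # a single (hypothetical) input sample
mean, std = mc_dropout_predict(x)
```

In an assurance setting, a downstream monitor could compare `std` against a calibrated threshold and hand control to a fallback path when the model is too uncertain, which is one way uncertainty estimates plug into a hybrid AI/classical stack.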
Sun 27 Apr (displayed time zone: Eastern Time, US & Canada)
09:00 - 10:30
09:00 (90m) Keynote: Assuring AI in Autonomous Driving: Challenges and Emerging Approaches (Research Track)