Tillman Radmer, Fabian Hüger & Nico Schmidt

Uncertainty Estimation of Neural Networks

How can a neural network know what it doesn't know? Discover how uncertainty estimation creates a critical safety net for autonomous driving.

#1 about 5 minutes

Understanding uncertainty through rare events in driving

Neural networks are more uncertain in rare situations, such as unusual vehicles on the road, because these events are underrepresented in the training data.

#2 about 3 minutes

Differentiating aleatoric and epistemic uncertainty

Uncertainty comes in two types: aleatoric uncertainty, the irreducible noise in the data itself (such as blurry object edges), and epistemic uncertainty, gaps in the model's knowledge that can be reduced by training on more data.
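
To make the split concrete, here is a minimal NumPy sketch (not from the talk) of a common entropy-based decomposition: given repeated stochastic predictions for a single input, the entropy of the mean prediction separates into a mean-entropy (aleatoric) term and a disagreement (epistemic) term.

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy of a categorical distribution."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def decompose_uncertainty(probs):
    """Split predictive uncertainty into aleatoric and epistemic parts.

    probs: array of shape (n_samples, n_classes) with softmax outputs
    from repeated stochastic forward passes (e.g. MC dropout, ensembles).
    """
    total = entropy(probs.mean(axis=0))  # entropy of the mean prediction
    aleatoric = entropy(probs).mean()    # mean entropy of each sample
    epistemic = total - aleatoric        # disagreement between samples
    return aleatoric, epistemic
```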

#3 about 3 minutes

Why classification scores are unreliable uncertainty metrics

Neural network confidence scores are often miscalibrated, tending toward overconfidence at high scores and underconfidence at low scores, which makes them poor predictors of true accuracy.
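
A standard way to quantify this miscalibration is the expected calibration error (ECE). The NumPy sketch below, assuming an array of confidence scores (the max softmax per prediction) and a 0/1 correctness array, bins predictions by confidence and averages the gap between accuracy and confidence per bin.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Weighted average of |accuracy - confidence| over confidence bins."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight by the fraction of samples in the bin
    return ece
```

A perfectly calibrated model has an ECE of zero; the miscalibration described above shows up as large per-bin gaps.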

#4 about 2 minutes

Using a simple alert system to predict model failure

The alert system approach uses a second, simpler model trained specifically to predict when the primary neural network is likely to fail on a given input.
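
As a rough illustration of the idea (the talk does not prescribe a specific implementation), the sketch below trains a logistic regression on the primary model's softmax outputs to predict failure. The data here is a synthetic stand-in; in practice the features and failure labels would come from a held-out validation set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for held-out data: softmax outputs of the primary
# network and the true class of each sample.
val_softmax = rng.dirichlet(np.ones(10), size=1000)  # (n, n_classes)
true_labels = rng.integers(0, 10, size=1000)
val_failed = (val_softmax.argmax(axis=1) != true_labels).astype(int)

# The "alert" model: a simple classifier that predicts primary-model failure.
alert_model = LogisticRegression(max_iter=1000).fit(val_softmax, val_failed)

def should_alert(softmax_out, threshold=0.5):
    """Flag an input when the predicted probability of failure is high."""
    p_fail = alert_model.predict_proba(softmax_out.reshape(1, -1))[0, 1]
    return p_fail > threshold
```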

#5 about 15 minutes

Using Monte Carlo dropout and student networks for estimation

Monte Carlo dropout estimates uncertainty by averaging over multiple stochastic forward passes; the costly sampling step can be avoided at inference time by training a smaller student network to mimic the sampled predictions.
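
A minimal PyTorch sketch of the sampling step, assuming a classification model that contains dropout layers: dropout stays active at inference, and the mean and variance are taken over several stochastic forward passes.

```python
import torch

def enable_dropout(model):
    """Keep the model in eval mode but re-activate its dropout layers."""
    for module in model.modules():
        if isinstance(module, torch.nn.Dropout):
            module.train()

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=20):
    """Mean prediction and per-class variance over stochastic forward passes."""
    model.eval()
    enable_dropout(model)
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )
    return probs.mean(dim=0), probs.var(dim=0)
```

A student network would then be trained to regress the mean and variance produced here, replacing the n_samples forward passes with a single one at runtime.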

#6 about 14 minutes

Applying uncertainty for active learning and corner case detection

An active learning framework uses uncertainty scores to intelligently select the most informative data (corner cases) from vehicle sensors for labeling and retraining models.
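
The selection step could look like the NumPy sketch below, assuming every sample in the unlabeled pool already carries an uncertainty score (for example, the epistemic term or the MC dropout variance from the sketches above).

```python
import numpy as np

def select_corner_cases(uncertainty_scores, budget):
    """Return indices of the most uncertain pool samples, most uncertain first.

    uncertainty_scores: one score per unlabeled sample.
    budget: how many samples the labeling pipeline can absorb.
    """
    ranked = np.argsort(uncertainty_scores)  # ascending order
    return ranked[-budget:][::-1]

# selected = select_corner_cases(scores, budget=500)
# The selected samples are sent for labeling and added to the training set.
```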

#7 about 4 minutes

Challenges in uncertainty-based data selection strategies

Key challenges for active learning include determining the right amount of data to select, evaluating performance on corner cases, and avoiding model-specific data collection bias.

#8 about 7 minutes

Addressing AI safety and insufficient generalization

Deep neural networks in autonomous systems pose safety risks due to insufficient generalization, unreliable confidence estimates, and brittleness under unseen data conditions.

#9 about 8 minutes

Building a safety argumentation framework for AI systems

A safety argumentation process involves identifying DNN-specific concerns, applying mitigation measures like uncertainty monitoring, and providing evidence through an iterative, model-driven development cycle.
