Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving
Victor Besnier, David Picard, Alexandre Briot
SPS
In this paper, we show how uncertainty estimation can be leveraged to enable safety-critical image segmentation in autonomous driving by triggering a fallback behavior whenever a target accuracy cannot be guaranteed. We introduce a new uncertainty measure based on disagreeing predictions, as quantified by a dissimilarity function. We propose to estimate this dissimilarity by training a deep neural architecture, an observer, in parallel to the task-specific network: the observer is dedicated to uncertainty estimation, while the task-specific network remains dedicated to making predictions. We train the observer with self-supervision, so our method requires no additional training data. We show experimentally that, at inference time, our approach is far less computationally intensive than the competing MC Dropout method, while delivering better results on safety-oriented evaluation metrics on the CamVid dataset, especially in the presence of glare artifacts.
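The core idea above — a separate observer trained, via self-supervision, to predict where the task network is wrong, so its output can serve as an uncertainty score for triggering a fallback — can be sketched in miniature. This is a hypothetical toy setup with synthetic data and a logistic-regression observer, not the paper's deep segmentation architecture; all variable names and dimensions here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the task network: noisy per-pixel class scores
# (the paper uses a deep segmentation network; this is illustrative).
n_pixels, n_classes, n_feats = 500, 4, 8
features = rng.normal(size=(n_pixels, n_feats))
true_w = rng.normal(size=(n_feats, n_classes))
clean_scores = features @ true_w
noisy_scores = clean_scores + rng.normal(scale=2.0, size=(n_pixels, n_classes))
labels = np.argmax(clean_scores, axis=1)   # ground truth
preds = np.argmax(noisy_scores, axis=1)    # task-network predictions

# Self-supervised targets for the observer: 1 where the task network
# disagrees with the ground truth, 0 elsewhere. No extra annotation
# beyond the task network's own training data is needed.
err_target = (preds != labels).astype(float)

# Observer: a logistic-regression model on the same features, trained
# by gradient descent to predict the task network's error probability.
w_obs = np.zeros(n_feats)
b_obs = 0.0
lr = 0.1
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(features @ w_obs + b_obs)))
    grad = p - err_target                       # d(BCE)/d(logit)
    w_obs -= lr * features.T @ grad / n_pixels
    b_obs -= lr * grad.mean()

# At inference, the observer's output is the per-pixel uncertainty:
# high scores flag pixels where a fallback behavior should trigger.
uncertainty = 1.0 / (1.0 + np.exp(-(features @ w_obs + b_obs)))
```

A single forward pass of the observer suffices at test time, which is what makes this family of approaches cheaper than MC Dropout's repeated stochastic forward passes.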