Authors:
Himanshu Agarwal 1,2; Rafal Dorociak 2 and Achim Rettberg 1,3
Affiliations:
1 Department of Computing Science, Carl von Ossietzky University Oldenburg, Germany
2 HELLA GmbH & Co. KGaA, Lippstadt, Germany
3 University of Applied Sciences Hamm-Lippstadt, Lippstadt, Germany
Keyword(s):
Image Classification, Deep Learning, Deep Neural Network, Vulnerability to Misclassification, Automated Driving.
Abstract:
The perception-based tasks in automated driving depend greatly on deep neural networks (DNNs). In the context of image classification, identifying the critical pairs of target classes that make the DNN highly vulnerable to misclassification can serve as a preliminary step before implementing appropriate measures for improving the robustness of the DNN or of the classification functionality. In this paper, we propose that the DNN’s vulnerability to misclassifying an input image into a particular incorrect class can be quantified by evaluating the similarity learnt by the trained model between the true class and the incorrect class. We also present criteria for ranking the DNN model’s vulnerability to a particular misclassification as low, moderate or high. To argue for the validity of our proposal, we conduct an empirical assessment on DNN-based traffic sign classification. We show that, upon evaluating the DNN model, most of the images for which it yields an erroneous prediction undergo misclassifications to which its vulnerability was ranked as high. Furthermore, we validate empirically that all the possible misclassifications to which the DNN model’s vulnerability is ranked as high are more difficult to deal with or control than the other possible misclassifications.
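The abstract does not spell out how the learnt inter-class similarity is computed or where the low/moderate/high boundaries lie. The following is therefore only a minimal illustrative sketch, assuming that similarity is measured as the cosine similarity between the mean penultimate-layer embeddings of two classes, and that the ranking thresholds (0.5 and 0.8 here) are hypothetical placeholders, not values taken from the paper.

# Illustrative sketch only: the similarity measure and the thresholds
# below are assumptions; the abstract does not specify them.
import numpy as np

def class_similarity(embeddings_a: np.ndarray, embeddings_b: np.ndarray) -> float:
    """Cosine similarity between the mean penultimate-layer embeddings of
    two target classes -- one plausible way to capture the similarity the
    trained model has learnt between them."""
    mu_a = embeddings_a.mean(axis=0)
    mu_b = embeddings_b.mean(axis=0)
    return float(np.dot(mu_a, mu_b) / (np.linalg.norm(mu_a) * np.linalg.norm(mu_b)))

def rank_vulnerability(similarity: float,
                       low_thr: float = 0.5,    # hypothetical threshold
                       high_thr: float = 0.8) -> str:  # hypothetical threshold
    """Map a learnt-similarity score to a low/moderate/high vulnerability rank."""
    if similarity >= high_thr:
        return "high"
    if similarity >= low_thr:
        return "moderate"
    return "low"

# Usage with random stand-in embeddings; in practice these would be the
# penultimate-layer activations of the trained traffic sign classifier,
# grouped by true class.
rng = np.random.default_rng(0)
emb_class_a = rng.normal(size=(100, 128))
emb_class_b = rng.normal(size=(100, 128))
s = class_similarity(emb_class_a, emb_class_b)
print(f"similarity={s:.3f}, vulnerability={rank_vulnerability(s)}")

Ranking every ordered pair of target classes this way would yield the list of critical class pairs that the abstract describes as the preliminary step before applying robustness measures.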