Research Article
Towards Functional Safety Compliance of Recurrent Neural Networks
@INPROCEEDINGS{10.4108/eai.20-11-2021.2314139,
  author={Davide Bacciu and Antonio Carta and Daniele Di Sarli and Claudio Gallicchio and Vincenzo Lomonaco and Salvatore Petroni},
  title={Towards Functional Safety Compliance of Recurrent Neural Networks},
  proceedings={Proceedings of the 1st International Conference on AI for People: Towards Sustainable AI, CAIP 2021, 20-24 November 2021, Bologna, Italy},
  publisher={EAI},
  proceedings_a={CAIP},
  year={2021},
  month={12},
  keywords={functional safety; dependability; recurrent neural networks; autonomous driving; safety performance indicators},
  doi={10.4108/eai.20-11-2021.2314139}
}
- Davide Bacciu
- Antonio Carta
- Daniele Di Sarli
- Claudio Gallicchio
- Vincenzo Lomonaco
- Salvatore Petroni
Year: 2021
Conference: CAIP
Publisher: EAI
DOI: 10.4108/eai.20-11-2021.2314139
Abstract
Deploying Autonomous Driving systems poses novel challenges for the automotive industry. One of the most critical aspects, which can severely compromise their deployment, is Functional Safety. The ISO 26262 standard provides guidelines to ensure the Functional Safety of road vehicles. However, this standard is not suited to the development of Artificial Intelligence based systems, such as systems based on Recurrent Neural Networks (RNNs). To address this issue, in this paper we propose a new methodology composed of three steps. The first step is the evaluation of the RNN's robustness against input perturbations. Then, a suitable set of safety measures must be defined according to the model's robustness, with less robust models requiring stronger mitigations. Finally, the functionality of the entire system must be extensively tested according to the Safety Of The Intended Functionality (SOTIF) guidelines, providing quantitative results about the occurrence of unsafe scenarios and evaluating appropriate Safety Performance Indicators.
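To make the first step concrete, the sketch below (a minimal illustration, not the paper's actual procedure; names such as `RNNModel` and `robustness_score` are hypothetical, and PyTorch is assumed) perturbs input sequences with Gaussian noise of increasing strength and measures how often the RNN's prediction remains unchanged. A score of this kind could then inform the second step, with lower scores calling for stronger safety mitigations.

```python
# Hypothetical sketch of an empirical robustness check for an RNN
# under Gaussian input perturbations. Illustrative only.
import torch
import torch.nn as nn


class RNNModel(nn.Module):
    """Toy recurrent classifier standing in for the system under test."""

    def __init__(self, n_in=8, n_hid=32, n_out=4):
        super().__init__()
        self.rnn = nn.GRU(n_in, n_hid, batch_first=True)
        self.out = nn.Linear(n_hid, n_out)

    def forward(self, x):
        _, h = self.rnn(x)      # final hidden state, shape (layers, batch, n_hid)
        return self.out(h[-1])  # one logit vector per input sequence


@torch.no_grad()
def robustness_score(model, x, sigma, n_trials=100):
    """Average fraction of noisy trials whose prediction matches the clean one."""
    clean_pred = model(x).argmax(dim=-1)
    stable = 0.0
    for _ in range(n_trials):
        noisy_pred = model(x + sigma * torch.randn_like(x)).argmax(dim=-1)
        stable += (noisy_pred == clean_pred).float().mean().item()
    return stable / n_trials


model = RNNModel().eval()
x = torch.randn(16, 50, 8)       # batch of 16 sequences of length 50
for sigma in (0.01, 0.05, 0.1):  # increasing perturbation strength
    print(f"sigma={sigma}: stability={robustness_score(model, x, sigma):.3f}")
```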