A Framework to Handle Uncertainties of Machine Learning Models in Compliance with ISO 26262
Publication date
2022
Keywords
Artificial intelligence
Uncertainty
Robustness
Certification
Machine learning
Evidential deep learning
Rights
(c) 2022 Springer, Cham. Full-text reproduced in accordance with the publisher's self-archiving policy.
Peer-Reviewed
Yes
Open Access status
openAccess
Accepted for publication
2021-09-10
Abstract
Assuring the safety of Machine Learning (ML) models, and thereby certifying them, is a key challenge for many kinds of ML applications. ML is one of the most widely used technological solutions for automating complex tasks such as autonomous driving, traffic sign recognition, and lane keep assist. While the application of ML is making significant contributions in the automotive industry, it also introduces concerns related to the safety and security of these systems. ML models should be robust and reliable throughout, and should prove their trustworthiness in all use cases associated with vehicle operation. Demonstrating confidence in the safety and security of ML-based systems, and thereby giving assurance to regulators, certification authorities, and other stakeholders, is an important task. This paper proposes a framework to handle the uncertainties of ML models in order to improve their safety level and thereby support the certification of ML models in the automotive industry.
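Note: this record does not reproduce the paper's implementation. Purely as an illustrative sketch of the kind of uncertainty quantification the keywords point to (evidential deep learning), the PyTorch snippet below shows the standard Dirichlet-based formulation of Sensoy et al. (2018), in which a classifier's raw outputs are mapped to per-class evidence and a scalar uncertainty score. The helper name `evidential_uncertainty` and the toy logits are assumptions for illustration, not the authors' framework.

```python
import torch
import torch.nn.functional as F

def evidential_uncertainty(logits: torch.Tensor):
    """Dirichlet-based class probabilities and predictive uncertainty.

    Standard evidential deep learning recipe (Sensoy et al., 2018):
    evidence e = softplus(logits) >= 0, Dirichlet parameters
    alpha = e + 1, and total uncertainty u = K / sum(alpha),
    where K is the number of classes.
    """
    evidence = F.softplus(logits)                  # non-negative evidence per class
    alpha = evidence + 1.0                         # Dirichlet concentration parameters
    strength = alpha.sum(dim=-1, keepdim=True)     # Dirichlet strength S = sum_k alpha_k
    probs = alpha / strength                       # expected class probabilities
    num_classes = logits.shape[-1]
    uncertainty = num_classes / strength.squeeze(-1)  # u in (0, 1]; 1 = total ignorance
    return probs, uncertainty

# Toy example: two inputs, three classes (e.g. traffic-sign categories).
logits = torch.tensor([[4.0, 0.1, 0.2],   # strong evidence -> low uncertainty
                       [0.3, 0.2, 0.1]])  # weak evidence  -> high uncertainty
probs, u = evidential_uncertainty(logits)
print(probs)
print(u)
```

In a safety workflow of the kind the abstract describes, predictions whose uncertainty u exceeds a chosen threshold could be routed to a fallback mechanism or flagged for human review rather than acted on directly.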
Version
Accepted manuscript
Citation
Vasudevan V, Abdullatif ARA, Kabir S et al (2022) A Framework to Handle Uncertainties of Machine Learning Models in Compliance with ISO 26262. In: Jansen T, Jensen R, Mac Parthaláin N et al (Eds) Advances in Computational Intelligence Systems. UKCI 2021. Advances in Intelligent Systems and Computing, Vol 1409: 508-518. Springer, Cham.
Link to Version of Record
https://doi.org/10.1007/978-3-030-87094-2_45
Type
Book chapter