A Framework to Handle Uncertainties of Machine Learning Models in Compliance with ISO 26262
Vasudevan, Vinod; Abdullatif, Amr R.A.; et al.
Publication Date
2022
Rights
(c) 2022 Springer Cham. Full-text reproduced in accordance with the publisher's self-archiving policy.
Peer-Reviewed
Yes
Open Access status
openAccess
Accepted for publication
2021-09-10
Abstract
Assuring safety, and thereby achieving certification, is a key challenge for
many kinds of Machine Learning (ML) models. ML is one of the most widely
used technological solutions for automating complex tasks such as autonomous
driving, traffic sign recognition, and lane-keep assist. While the application
of ML is making significant contributions in the automotive industry, it also
introduces concerns related to the safety and security of these systems. ML
models should be robust and reliable throughout, and should prove their
trustworthiness in all use cases associated with vehicle operation.
Establishing confidence in the safety and security of ML-based systems, and
thereby giving assurance to regulators, certification authorities, and other
stakeholders, is an important task. This paper proposes a framework to handle
uncertainties of ML models in order to improve their safety level and thereby
support the certification of ML models in the automotive industry.
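The framework itself is described in the full text. Purely as a loose illustration of what "handling uncertainty" can mean for an ML model in practice, the sketch below uses Monte Carlo dropout to attach an uncertainty estimate to a classifier's predictions; this technique and the toy `TrafficSignClassifier` model are assumptions for illustration, not necessarily the paper's method.

```python
# Hypothetical sketch: Monte Carlo dropout as one common way to estimate
# the predictive uncertainty of an ML model. This is NOT the paper's
# framework; it only illustrates the kind of uncertainty signal such a
# framework could consume when deciding whether a prediction is safe to use.
import torch
import torch.nn as nn


class TrafficSignClassifier(nn.Module):
    """Toy stand-in for a perception model (hypothetical, for illustration)."""

    def __init__(self, in_features: int = 32, n_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 64),
            nn.ReLU(),
            nn.Dropout(p=0.5),  # dropout is kept stochastic at inference time
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


@torch.no_grad()
def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 50):
    """Run n_samples stochastic forward passes with dropout enabled and
    return the mean class probabilities and their per-class standard
    deviation, a simple proxy for the model's uncertainty."""
    model.train()  # keeps dropout active; no gradients flow under no_grad
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )
    return probs.mean(dim=0), probs.std(dim=0)


if __name__ == "__main__":
    model = TrafficSignClassifier()
    x = torch.randn(1, 32)  # one dummy input
    mean_p, std_p = mc_dropout_predict(model, x)
    # A high spread on the predicted class suggests the prediction should be
    # treated as unreliable, e.g. deferred to a fallback safety mechanism.
    print(mean_p.argmax(dim=-1), std_p.max().item())
```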
Version
Accepted manuscript
Citation
Vasudevan V, Abdullatif ARA, Kabir S et al (2022) A Framework to Handle Uncertainties of Machine Learning Models in Compliance with ISO 26262. In: Jansen T, Jensen R, Mac Parthaláin N et al (Eds) Advances in Computational Intelligence Systems. UKCI 2021. Advances in Intelligent Systems and Computing. Vol 1409: 508-518. Springer, Cham.
Type
Book chapter