BRADFORD SCHOLARS


    A Framework to Handle Uncertainties of Machine Learning Models in Compliance with ISO 26262

    Publication date
    2022
    End of Embargo
    2023-11-18
    Author
    Vasudevan, Vinod
    Abdullatif, Amr R.A.
    Kabir, Sohag
    Campean, I. Felician
    Keyword
    Artificial intelligence
    Uncertainty
    Robustness
    Certification
    Machine learning
    Evidential deep learning
    Rights
    (c) 2022 Springer Cham. Full-text reproduced in accordance with the publisher's self-archiving policy.
    Peer-Reviewed
    Yes
    Open Access status
    openAccess
    
    Abstract
    Assuring safety, and thereby achieving certification, is a key challenge for many kinds of Machine Learning (ML) models. ML is one of the most widely used technological solutions for automating complex tasks such as autonomous driving, traffic sign recognition, and lane keep assist. While the application of ML is making significant contributions in the automotive industry, it introduces concerns related to the safety and security of these systems. ML models should be robust and reliable throughout, and prove their trustworthiness in all use cases associated with vehicle operation. Establishing confidence in the safety and security of ML-based systems, and thereby giving assurance to regulators, certification authorities, and other stakeholders, is an important task. This paper proposes a framework to handle the uncertainties of ML models in order to improve their safety level and thereby support certification of ML models in the automotive industry.
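
    Editorial note: the record's keywords identify evidential deep learning as the uncertainty-handling technique. The Python sketch below is purely illustrative and not taken from the paper; the function name, the softplus evidence mapping, and the example logits are assumptions. It shows the common Dirichlet-based formulation in which a classifier's raw outputs are mapped to non-negative evidence, and the total evidence yields a scalar uncertainty score that could be thresholded to flag unreliable predictions for fallback handling.

    import torch
    import torch.nn.functional as F

    def evidential_uncertainty(logits: torch.Tensor):
        """Hypothetical sketch: map classifier logits to class probabilities
        and a per-sample uncertainty score via a Dirichlet formulation."""
        # Non-negative per-class "evidence" (softplus keeps gradients smooth).
        evidence = F.softplus(logits)
        alpha = evidence + 1.0                       # Dirichlet concentration parameters
        strength = alpha.sum(dim=-1, keepdim=True)   # total evidence S = sum(alpha)
        prob = alpha / strength                      # expected class probabilities
        num_classes = logits.shape[-1]
        uncertainty = num_classes / strength.squeeze(-1)  # u = K / S, in (0, 1]
        return prob, uncertainty

    # Example: a batch of two 3-class predictions.
    logits = torch.tensor([[8.0, -2.0, -2.0],    # strong evidence  -> low u
                           [-2.0, -2.0, -2.0]])  # little evidence  -> high u
    probs, u = evidential_uncertainty(logits)
    print(probs, u)

    Under this formulation, uncertainty is high whenever the model has gathered little evidence for any class, which is the behaviour a safety argument would rely on when deciding whether an ML output can be trusted.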
    URI
    http://hdl.handle.net/10454/18707
    Version
    Accepted manuscript
    Citation
    Vasudevan V, Abdullatif ARA, Kabir S et al (2022) A Framework to Handle Uncertainties of Machine Learning Models in Compliance with ISO 26262. In: Jansen T, Jensen R, Mac Parthaláin N et al (Eds) Advances in Computational Intelligence Systems. UKCI 2021. Advances in Intelligent Systems and Computing. Vol 1409: 508-518. Springer, Cham.
    Link to publisher’s version
    https://doi.org/10.1007/978-3-030-87094-2_45
    Type
    Book chapter
    Notes
    The full text of this book chapter will be released for public view at the end of the publisher embargo on 18 Nov 2023.
    Collections
    Engineering and Informatics Publications
