Contributions to evaluation of machine learning models. Applicability domain of classification models
dc.contributor.advisor | Neagu, Daniel | |
dc.contributor.author | Rado, Omesaad A.M. | |
dc.date.accessioned | 2021-04-22T13:24:08Z | |
dc.date.available | 2021-04-22T13:24:08Z | |
dc.date.issued | 2019 | |
dc.identifier.uri | http://hdl.handle.net/10454/18447 | |
dc.description.abstract | Artificial intelligence (AI) and machine learning (ML) present application opportunities and challenges that can be framed as learning problems. The performance of machine learning models depends on both the algorithms and the data. Learning algorithms build a model of reality by training and testing on data, and their performance reflects the degree to which the assumed model agrees with reality. ML algorithms have been used successfully in numerous classification problems. With the growing popularity of ML models for many purposes across different domains, more formal validation of such predictive models is now required. Many studies address model evaluation, robustness, reliability, and the quality of data and data-driven models; however, they do not yet consider the concept of the applicability domain (AD). The issue is that in many fields the AD is often not well defined, or not defined at all. This work investigates the robustness of ML classification models from the applicability domain perspective. A standard definition of the applicability domain refers to the region of the input space in which the model provides results with a specified reliability. The main aim of this study is to investigate the connection between the applicability domain approach and classification model performance. We examine the usefulness of assessing the AD of a classification model, i.e. the reliability, reuse, and robustness of classifiers. The work is organised into three approaches: first, assessing the applicability domain of a classification model; second, investigating the robustness of the classification model based on the applicability domain approach; and third, selecting an optimal model using Pareto optimality. The experiments consider different machine learning algorithms for binary and multi-class classification on healthcare datasets from public benchmark data repositories. In the first approach, the decision tree (DT) algorithm is used for classification, and a feature selection method is applied to choose the features used for classification. The resulting classifiers are used in the third approach for model selection via Pareto optimality. The second approach is implemented in three steps: building the classification model, generating synthetic data, and evaluating the obtained results. The results of the study provide an understanding of how the proposed approach can help define a model's robustness and applicability domain in order to provide reliable outputs. These approaches open opportunities for classification data and model management. The proposed algorithms are evaluated through experiments on the classification accuracy of instances that fall within the domain of the model. For the first approach, using all features, the highest accuracy obtained is 0.98, with an average threshold of 0.34, for the Breast Cancer dataset; after applying the recursive feature elimination (RFE) method, the accuracy is 0.96 with an average threshold of 0.27. For the robustness of the classification model based on the applicability domain approach, the minimum accuracy is 0.62 for the Indian Liver Patient dataset at r=0.10, and the maximum accuracy is 0.99 for the Thyroid dataset at r=0.10. For the selection of an optimal model using Pareto optimality, the selected classifier achieves an accuracy of 0.94 with an average threshold of 0.35. This research investigates critical aspects of the applicability domain as related to the robustness of ML classification algorithms. The performance of machine learning techniques depends on the degree to which the model's predictions are reliable. In the literature, the robustness of an ML model can be defined as its ability to keep the testing error close to the training error; this property also describes the stability of model performance when tested on new datasets. In conclusion, this thesis introduces the concept of the applicability domain for classifiers and tests it through case studies on health-related public benchmark datasets. | en_US |
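Note: as an illustration of the first approach summarised in the abstract (decision-tree classification with recursive feature elimination), the following minimal sketch shows one possible way such a pipeline could be set up. The thesis does not specify an implementation; scikit-learn, the bundled Breast Cancer Wisconsin loader, the 70/30 split, and the choice of 10 retained features are assumptions made here purely for illustration.

    # Minimal sketch (not the thesis implementation): decision tree + RFE.
    # Assumptions: scikit-learn, bundled breast cancer benchmark, 70/30 split,
    # 10 features retained by RFE.
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.feature_selection import RFE
    from sklearn.metrics import accuracy_score

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    # Baseline: decision tree trained on all features.
    dt_all = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
    print("all features:", accuracy_score(y_test, dt_all.predict(X_test)))

    # RFE: select a reduced feature subset, then refit the decision tree.
    selector = RFE(DecisionTreeClassifier(random_state=0),
                   n_features_to_select=10).fit(X_train, y_train)
    dt_rfe = DecisionTreeClassifier(random_state=0).fit(
        selector.transform(X_train), y_train)
    print("RFE features:",
          accuracy_score(y_test, dt_rfe.predict(selector.transform(X_test))))

The applicability-domain coverage step (restricting evaluation to instances that fall within the domain of the model at a given radius r) and the Pareto-optimal model selection described in the abstract are not shown in this sketch.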
dc.description.sponsorship | Ministry of Higher Education in Libya | en_US |
dc.language.iso | en | en_US |
dc.rights | The University of Bradford theses are licensed under a Creative Commons Licence (CC BY-NC-ND 3.0): http://creativecommons.org/licenses/by-nc-nd/3.0/ | eng |
dc.subject | Machine learning | en_US |
dc.subject | Classification algorithms | en_US |
dc.subject | Binary classification | en_US |
dc.subject | Accuracy | en_US |
dc.subject | Model evaluation | en_US |
dc.subject | Model reliability | en_US |
dc.subject | Applicability domain | en_US |
dc.subject | Model robustness | en_US |
dc.subject | Model coverage | en_US |
dc.subject | Healthcare data | en_US |
dc.title | Contributions to evaluation of machine learning models. Applicability domain of classification models | en_US |
dc.type.qualificationlevel | doctoral | en_US |
dc.publisher.institution | University of Bradford | eng |
dc.publisher.department | Faculty of Engineering and Informatics | en_US |
dc.type | Thesis | eng |
dc.type.qualificationname | PhD | en_US |
dc.date.awarded | 2019 | |
refterms.dateFOA | 2021-04-22T13:24:08Z |