Show simple item record

dc.contributor.author: Abubakar, Aliyu
dc.contributor.author: Ugail, Hassan
dc.contributor.author: Smith, K.M.
dc.contributor.author: Bukar, Ali M.
dc.contributor.author: Elmahmudi, Ali
dc.date.accessioned: 2022-03-20T06:21:09Z
dc.date.accessioned: 2022-04-28T08:41:43Z
dc.date.available: 2022-03-20T06:21:09Z
dc.date.available: 2022-04-28T08:41:43Z
dc.date.issued: 2020-12
dc.identifier.citation: Abubakar A, Ugail H, Smith KM et al (2020) Burns Depth Assessment Using Deep Learning Features. Journal of Medical and Biological Engineering. 40(6): 923-933. [en_US]
dc.identifier.uri: http://hdl.handle.net/10454/18941
dc.description: Yes [en_US]
dc.description.abstract: Burn depth evaluation is a lifesaving and very challenging task that requires objective techniques to accomplish. Visual assessment is the method most commonly used by surgeons, but its reliability ranges between 60 and 80% and it is subjective, lacking any standard guideline. Currently, the only standard adjunct to clinical evaluation of burn depth is Laser Doppler Imaging (LDI), which measures microcirculation within the dermal tissue and provides the burn's potential healing time, which corresponds to the depth of the injury, achieving up to 100% accuracy. However, the use of LDI is limited by many factors: its limited affordability and high diagnostic costs, accuracy that is affected by movement (making it difficult to assess paediatric patients), the high level of human expertise required to operate the device, and the fact that 100% accuracy is only achievable after 72 h. These shortfalls necessitate an objective and affordable technique. Method: In this study, we leverage deep transfer learning, using two pretrained models, ResNet50 and VGG16, to extract image features (ResFeat50 and VggFeat16) from a burn dataset of 2080 RGB images composed of healthy skin, first-degree, second-degree and third-degree burns, evenly distributed. We then use One-versus-One Support Vector Machines (SVM) for multi-class prediction, trained with 10-fold cross-validation to achieve an optimum trade-off between bias and variance. Results: The proposed approach yields a maximum prediction accuracy of 95.43% using ResFeat50 and 85.67% using VggFeat16. The average recall, precision and F1-score are 95.50%, 95.50% and 95.50% for ResFeat50, and 85.75%, 86.25% and 85.75% for VggFeat16, respectively. Conclusion: The proposed pipeline achieves state-of-the-art prediction accuracy and, notably, indicates that a decision on whether the injury requires surgical intervention such as skin grafting can be made in less than a minute. [en_US]
dc.language.iso: en [en_US]
dc.rights: (c) 2020 The Authors. This is an Open Access article distributed under the Creative Commons CC-BY licence (http://creativecommons.org/licenses/by/4.0/) [en_US]
dc.subject: Skin burns [en_US]
dc.subject: Burn depths [en_US]
dc.subject: Deep learning [en_US]
dc.subject: Features [en_US]
dc.subject: SVM [en_US]
dc.subject: Classification [en_US]
dc.title: Burns Depth Assessment Using Deep Learning Features [en_US]
dc.status.refereed: Yes [en_US]
dc.date.Accepted: 2020-10-08
dc.date.application: 2020-10-16
dc.type: Article [en_US]
dc.type.version: Published version [en_US]
dc.identifier.doi: https://doi.org/10.1007/s40846-020-00574-z
dc.rights.license: CC-BY [en_US]
dc.date.updated: 2022-03-20T06:21:17Z
refterms.dateFOA: 2022-04-28T08:42:04Z
dc.openaccess.status: openAccess [en_US]
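The abstract above describes a pipeline of deep-feature extraction with a pretrained CNN followed by a one-versus-one SVM evaluated with 10-fold cross-validation. The sketch below is a minimal illustration of that idea, not the authors' code: it assumes a Keras ResNet50 backbone for ResFeat50-style features, scikit-learn's OneVsOneClassifier wrapping an SVC with a linear kernel (the kernel is an assumption), and a hypothetical burn_dataset/ directory with one subfolder per class.

```python
# Minimal sketch (not the authors' implementation): pretrained ResNet50 features
# ("ResFeat50"-style) fed to a one-vs-one SVM, scored with 10-fold cross-validation.
from pathlib import Path

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsOneClassifier
from sklearn.svm import SVC
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
from tensorflow.keras.preprocessing import image

# Hypothetical dataset layout: burn_dataset/<class_name>/*.jpg with four classes
# (healthy skin, first-, second- and third-degree burns).
DATA_DIR = Path("burn_dataset")
class_names = sorted(d.name for d in DATA_DIR.iterdir() if d.is_dir())

image_paths, labels = [], []
for label, name in enumerate(class_names):
    for path in sorted((DATA_DIR / name).glob("*.jpg")):
        image_paths.append(str(path))
        labels.append(label)

# Pretrained ResNet50 without its classification head; global average pooling
# yields a 2048-dimensional feature vector per image.
backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg")

def extract_features(paths):
    feats = []
    for p in paths:
        img = image.load_img(p, target_size=(224, 224))            # RGB input
        x = preprocess_input(image.img_to_array(img)[np.newaxis])  # shape (1, 224, 224, 3)
        feats.append(backbone.predict(x, verbose=0).ravel())
    return np.vstack(feats)

X = extract_features(image_paths)
y = np.asarray(labels)

# One-versus-one SVM for the four-class problem, evaluated with 10-fold
# cross-validation as described in the abstract.
clf = OneVsOneClassifier(SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=10)
print(f"Mean 10-fold CV accuracy: {scores.mean():.4f}")
```

The paper reports the same procedure with VGG16 features (VggFeat16); swapping the backbone for tensorflow.keras.applications.vgg16.VGG16, together with its own preprocess_input, would mirror that variant.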


Item file(s)

Name: Abubakar2020_Article_BurnsDept ...
Size: 1.065 MB
Format: PDF
Description: abubakar_et_al_2020
