Show simple item record

dc.contributor.author  Elmahmudi, Ali A.M.*
dc.contributor.author  Ugail, Hassan*
dc.date.accessioned  2019-05-28T11:12:33Z
dc.date.available  2019-05-28T11:12:33Z
dc.date.issued  2019-10
dc.identifier.citation  Elmahmudi AAM and Ugail H (2019) Deep face recognition using imperfect facial data. Future Generation Computer Systems. 99: 213-223.  en_US
dc.identifier.uri  http://hdl.handle.net/10454/17059
dc.description  Yes  en_US
dc.description.abstract  Today, computer-based face recognition is a mature and reliable mechanism that is practically utilised in many access control scenarios. As such, face recognition or authentication is predominantly performed using ‘perfect’ data of full frontal facial images. In reality, however, there are numerous situations where full frontal faces may not be available; the imperfect face images that often come from CCTV cameras demonstrate the case in point. Hence, the problem of computer-based face recognition using partial facial data as probes is still a largely unexplored area of research. Given that humans and computers perform face recognition and authentication inherently differently, it is both interesting and intriguing to understand how a computer favours various parts of the face when presented with the challenges of face recognition. In this work, we explore the question surrounding the idea of face recognition using partial facial data. We do so by applying novel experiments to test the performance of machine learning using partial faces and other manipulations of face images, such as rotation and zooming, which we use as training and recognition cues. In particular, we study the rate of recognition subject to various parts of the face, such as the eyes, mouth, nose and cheeks. We also study the effect of facial rotation on recognition, as well as the effect of zooming out of the facial images. Our experiments are based on a state-of-the-art convolutional neural network architecture along with the pre-trained VGG-Face model, through which we extract features for machine learning. We then use two classifiers, namely cosine similarity and linear support vector machines, to test the recognition rates. We ran our experiments on two publicly available datasets, namely the controlled Brazilian FEI dataset and the uncontrolled LFW dataset. Our results show that individual parts of the face, such as the eyes, nose and cheeks, have low recognition rates on their own, though the rate of recognition quickly goes up when parts of the face are presented in combined form as probes.  en_US
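The matching step the abstract describes — comparing a probe's CNN features against gallery features with cosine similarity — can be sketched as follows. This is a minimal illustration, not the authors' code: the descriptors here are tiny made-up vectors standing in for features a pre-trained network such as VGG-Face would produce, and the identities are hypothetical.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, gallery: dict) -> str:
    """Return the gallery identity whose descriptor is most similar to the probe."""
    return max(gallery, key=lambda name: cosine_similarity(probe, gallery[name]))

# Toy 4-D descriptors standing in for high-dimensional VGG-Face features.
gallery = {
    "alice": np.array([0.9, 0.1, 0.0, 0.2]),
    "bob":   np.array([0.1, 0.8, 0.3, 0.0]),
}
# A probe descriptor, e.g. features extracted from a partial face image.
probe = np.array([0.85, 0.15, 0.05, 0.1])
print(identify(probe, gallery))  # prints "alice"
```

The paper's second classifier, a linear support vector machine, would replace the nearest-neighbour `identify` step with a model trained on the same extracted features.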
dc.language.iso  en  en_US
dc.rights  (c) 2019 Elsevier. This is an Open Access article distributed under the Creative Commons CC-BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/)  en_US
dc.subject  Face recognition  en_US
dc.subject  Convolutional neural networks  en_US
dc.subject  Deep learning  en_US
dc.subject  Cosine similarity  en_US
dc.title  Deep face recognition using imperfect facial data  en_US
dc.status.refereed  Yes  en_US
dc.date.Accepted  2019-04-10
dc.date.application  2019-04-27
dc.type  Article  en_US
dc.type.version  Published version  en_US
dc.identifier.doi  https://doi.org/10.1016/j.future.2019.04.025
refterms.dateFOA  2019-05-28T11:12:33Z


Item file(s)

Name: el_mahmudi_et_al_2019.pdf
Size: 3.709 MB
Format: PDF