Show simple item record

dc.contributor.author  Al-Waisy, Alaa S.
dc.contributor.author  Qahwaji, Rami S.R.
dc.contributor.author  Ipson, Stanley S.
dc.contributor.author  Al-Fahdawi, Shumoos
dc.date.accessioned  2017-09-11T16:26:56Z
dc.date.available  2017-09-11T16:26:56Z
dc.date.issued  2017
dc.identifier.citation  Al-Waisy AS, Qahwaji R, Ipson S et al (2017) A multimodal deep learning framework using local feature representations for face recognition. Machine Vision and Applications. 29(1): 35-54.  en_US
dc.identifier.uri  http://hdl.handle.net/10454/13122
dc.description  Yes  en_US
dc.description.abstract  The most recent face recognition systems are mainly dependent on feature representations obtained using either local handcrafted descriptors, such as local binary patterns (LBP), or a deep learning approach, such as a deep belief network (DBN). However, the former usually suffers from the wide variations in face images, while the latter usually discards the local facial features, which are proven to be important for face recognition. In this paper, a novel framework based on merging the advantages of local handcrafted feature descriptors with the DBN is proposed to address the face recognition problem in unconstrained conditions. Firstly, a novel multimodal local feature extraction approach based on merging the advantages of the Curvelet transform with the Fractal dimension is proposed and termed the Curvelet–Fractal approach. The main motivation of this approach is that the Curvelet transform, a new anisotropic and multidirectional transform, can efficiently represent the main structure of the face (e.g., edges and curves), while the Fractal dimension is one of the most powerful texture descriptors for face images. Secondly, a novel framework is proposed, termed the multimodal deep face recognition (MDFR) framework, to learn additional feature representations by training a DBN on top of the local feature representations instead of the pixel intensity representations. We demonstrate that representations acquired by the proposed MDFR framework are complementary to those acquired by the Curvelet–Fractal approach. Finally, the performance of the proposed approaches has been evaluated by conducting a number of extensive experiments on four large-scale face datasets: the SDUMLA-HMT, FERET, CAS-PEAL-R1, and LFW databases. The results obtained from the proposed approaches outperform other state-of-the-art approaches (e.g., LBP, DBN, WPCA), achieving new state-of-the-art results on all the employed datasets.  en_US
dc.language.iso  en  en_US
dc.relation.isreferencedby  https://doi.org/10.1007/s00138-017-0870-2  en_US
dc.rights  © 2017 The Authors. This article is distributed under the terms of the Creative Commons CC-BY License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.  en_US
dc.subject  Face recognition; Curvelet transform; Fractal dimension; Fractional Brownian motion; Deep belief network; SDUMLA-HMT database; FERET database; LFW database  en_US
dc.title  A multimodal deep learning framework using local feature representations for face recognition  en_US
dc.status.refereed  Yes  en_US
dc.date.Accepted  2017-07-29
dc.date.application  2017-09-04
dc.type  Article  en_US
dc.type.version  Published version  en_US
refterms.dateFOA  2018-07-26T09:03:50Z
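The abstract describes using the Fractal dimension as a texture descriptor for face images. As a minimal illustrative sketch only (the function name, thresholding, and box sizes below are assumptions, not the authors' Curvelet–Fractal implementation, which applies fractal analysis to Curvelet sub-bands), a basic box-counting estimate of the fractal dimension can be computed as:

```python
import numpy as np

def box_counting_dimension(img, threshold=0.5):
    """Estimate the box-counting (fractal) dimension of an image.

    Illustrative sketch: binarise the image, then count how many
    boxes of decreasing size contain at least one occupied pixel;
    the slope of log(count) vs log(1/size) estimates the dimension.
    """
    # Binarise: pixels above the (assumed) threshold are "occupied".
    z = img > threshold
    # Box sizes: powers of two up to half the smaller image side.
    n = min(z.shape)
    sizes = 2 ** np.arange(1, int(np.log2(n)))
    counts = []
    for size in sizes:
        # Sum occupancy within each size x size box, then count
        # the boxes that contain at least one occupied pixel.
        s = np.add.reduceat(
            np.add.reduceat(z, np.arange(0, z.shape[0], size), axis=0),
            np.arange(0, z.shape[1], size), axis=1)
        counts.append(np.count_nonzero(s))
    # Fit a line to log(count) vs log(1/size); slope ~ dimension.
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope
```

For a completely filled region the estimate approaches 2 (a plane-filling set), while textured facial regions yield intermediate values that serve as discriminative features.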


Item file(s)

Name: al-waisy_et_al_2017.pdf
Size: 2.636Mb
Format: PDF
Description: Keep suppressed - cover sheet ...

Name: al-waisy_et_al_2017.pdf
Size: 2.511Mb
Format: PDF
Description: Main article

This item appears in the following Collection(s)
