
Author (dc.contributor.author): Zambrano Ibujes, Jorge Eduardo
Author (dc.contributor.author): Benalcazar Villavicencio, Daniel Patricio
Author (dc.contributor.author): Pérez Flores, Claudio Andrés
Author (dc.contributor.author): Bowyer, Kevin W.
Admission date (dc.date.accessioned): 2023-07-21T20:53:50Z
Available date (dc.date.available): 2023-07-21T20:53:50Z
Publication date (dc.date.issued): 2022
Item citation (dc.identifier.citation): IEEE Access (2022), 3166910
Identifier (dc.identifier.other): 10.1109/ACCESS.2022.3166910
Identifier (dc.identifier.uri): https://repositorio.uchile.cl/handle/2250/194925
Abstract (dc.description.abstract): The iris is one of the most accurate biometric traits, which has led to the successful development of large-scale applications. However, with population growth and new international applications, datasets are constantly increasing in size, requiring more robust and faster methods. Many descriptors and feature extractors have been developed to represent the iris biometric pattern. Most of them were designed by human experts and require a bit-shifting process to increase robustness to eye rotation, at the expense of longer matching times. We propose a fast iris recognition method that requires a single matching operation and is based on pre-trained image-classification models as feature extractors. Our approach uses the filters of the first layers of Convolutional Neural Networks as feature extractors and does not require fine-tuning for new datasets. Since the selected features extracted from convolutional layers encode the iris surface, they have the advantage of not being tied to specific spatial positions. Thus, no bit-shifting process is needed in the matching stage, eliminating a significant number of computations. Additionally, to mitigate the effect produced by the mask border in rubber-sheet images, we propose filtering the feature-map tensors by masking their channels and selecting the most relevant features. Our method was assessed on the publicly available CASIA Iris Lamp and CASIA Iris Thousand datasets and showed significant improvements in both accuracy and matching time.
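The abstract describes the core pipeline: take the filters of the first convolutional layers of a pretrained image-classification network as a fixed feature extractor, suppress the occluded regions of the normalized (rubber-sheet) iris image with the segmentation mask, and compare two irises with a single distance computation instead of a bit-shifted code match. The Python sketch below illustrates that idea; it is not the authors' released code, and the choice of VGG16 as backbone, the cut-off after the first convolutional block, masked average pooling, and cosine similarity are illustrative assumptions rather than the exact configuration reported in the paper.

# Minimal sketch of low-level-CNN iris descriptors with single matching
# (illustrative assumptions: VGG16 backbone, first conv block, cosine similarity).
import torch
import torch.nn.functional as F
from torchvision import models

# Keep only the earliest convolutional layers of a pretrained network;
# low-level filters (edges, textures) are generic, so no fine-tuning is needed.
backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features[:5]
backbone.eval()

@torch.no_grad()
def iris_descriptor(rubber_sheet: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """rubber_sheet: (1, 3, H, W) normalized iris image (gray channel replicated);
    mask: (1, 1, H, W) with 1 = valid iris pixel, 0 = eyelid/eyelash occlusion."""
    fmap = backbone(rubber_sheet)                        # (1, C, H', W') feature maps
    mask_small = F.interpolate(mask, size=fmap.shape[-2:], mode="nearest")
    fmap = fmap * mask_small                             # zero out occluded regions
    # Pool each channel over space: the descriptor no longer depends on the
    # angular offset of the rubber-sheet image.
    desc = fmap.sum(dim=(2, 3)) / mask_small.sum().clamp(min=1.0)
    return F.normalize(desc, dim=1)                      # unit-norm descriptor, shape (1, C)

def match_score(desc_a: torch.Tensor, desc_b: torch.Tensor) -> float:
    """Single matching operation: one cosine similarity, no shift-and-compare loop."""
    return float((desc_a * desc_b).sum())

Because the descriptor is pooled over the spatial (angular) dimension of the rubber-sheet image, it does not depend on a particular rotation offset, which is why one similarity computation can replace the bit-shifting loop of classical iris codes in this sketch.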
Sponsor (dc.description.sponsorship): Agencia Nacional de Investigación y Desarrollo (ANID) FONDECYT 1191610; AFB180004; ANID/BASAL FB210024; 21191614; Department of Electrical Engineering and Advanced Mining Technology Center, Universidad de Chile
Language (dc.language.iso): en
Publisher (dc.publisher): IEEE
Type of license (dc.rights): Attribution-NonCommercial-NoDerivs 3.0 United States
Link to License (dc.rights.uri): http://creativecommons.org/licenses/by-nc-nd/3.0/us/
Source (dc.source): IEEE Access
Keywords (dc.subject): Feature extraction
Keywords (dc.subject): Iris recognition
Keywords (dc.subject): Rubber
Keywords (dc.subject): Iris
Keywords (dc.subject): Biometrics (access control)
Keywords (dc.subject): Training
Keywords (dc.subject): Support vector machines
Keywords (dc.subject): Biometrics
Keywords (dc.subject): Bit-shifting
Keywords (dc.subject): Deep-learning
Title (dc.title): Iris recognition using low-level CNN layers without training and single matching
Document type (dc.type): Journal article
Version (dc.description.version): Published version - publisher's final version
Access rights (dcterms.accessRights): Open access
Cataloguer (uchile.catalogador): apc
Indexation (uchile.index): WoS publication article


