A realistic virtual environment for evaluating face analysis systems under dynamic conditions
Author
dc.contributor.author
Correa Pérez, Mauricio
Author
dc.contributor.author
Ruiz del Solar, Javier
Author
dc.contributor.author
Verschae Tannenbaum, Rodrigo
Accession date
dc.date.accessioned
2016-05-22T04:07:55Z
Available date
dc.date.available
2016-05-22T04:07:55Z
Publication date
dc.date.issued
2016
Item citation
dc.identifier.citation
Pattern Recognition 52 (2016) 160–173
en_US
Identifier
dc.identifier.other
DOI: 10.1016/j.patcog.2015.11.008
Identifier
dc.identifier.uri
https://repositorio.uchile.cl/handle/2250/138414
General note
dc.description
ISI-indexed publication article
en_US
Abstract
dc.description.abstract
This paper proposes a new tool for the evaluation of face analysis systems under dynamic experimental conditions. The tool primarily consists of a virtual environment where a virtual agent (e.g., a simulated robot) carries out a face analysis process (e.g., face detection and recognition). This virtual agent can navigate in the virtual environment, where one or more subjects are present, and it can observe the subjects' faces from different distances and angles (yaw, pitch, and roll), and under different illumination conditions (indoor or outdoor). The current view of the agent, i.e., the image that the agent observes, is generated by composing real face and background images acquired prior to their use in the virtual environment. In the virtual environment, different kinds of agents and agent trajectories can be simulated, such as an agent navigating in a scene with people looking in different directions (mimicking a home-like environment), an agent performing a circular scan (such as in a security checkpoint), or a camera-based surveillance system observing a person. In addition, during the recognition process the agent can actively change its viewpoint, seeking to improve the recognition results. The proposed tool provides the developer with all the functionality needed to build the evaluation scenario: a set of real face images with real background information, a virtual agent with navigation capabilities, a scenario configuration (number, position, and pose of the subjects to be observed), an agent trajectory definition, the generation of the simulated agent's view-dependent images, some basic active vision mechanisms, and the ground truth data (e.g., face ID and pose for every observation), allowing the evaluation of face analysis methods under realistic conditions. Three usage examples are presented: studies of the robustness of face detection and of face recognition methods under pose variations, and the evaluation of an integrated face analysis system to be used by a service robot. The proposed methodology may be of interest to researchers and developers of face analysis methods, in particular in the robotics and biometrics communities.
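To make the concepts in the abstract concrete, the following is a minimal sketch, in Python, of how a scenario configuration (subjects with position and pose), an agent trajectory, and per-step ground-truth observations could be organized. It is not the authors' actual tool or API; every class, field, and function name here is a hypothetical illustration, and the real system additionally renders the composed face-plus-background image for each viewpoint.

```python
# Hypothetical sketch of a scenario configuration and an agent observation
# loop, inspired by the tool described in the abstract (not the authors' API).

from dataclasses import dataclass
from math import atan2, degrees, hypot
from typing import List


@dataclass
class Subject:
    face_id: str    # ground-truth identity
    x: float        # position in the virtual scene (meters)
    y: float
    yaw: float      # direction the subject is facing (degrees)


@dataclass
class AgentPose:
    x: float
    y: float
    heading: float  # agent camera heading (degrees)


@dataclass
class Observation:
    face_id: str
    distance: float      # agent-to-subject distance (meters)
    relative_yaw: float  # face yaw relative to the camera axis (degrees)


def observe(agent: AgentPose, subjects: List[Subject]) -> List[Observation]:
    """For one agent pose, compute the ground-truth distance and relative yaw
    of every subject; a full tool would also generate the view-dependent image."""
    observations = []
    for s in subjects:
        dx, dy = s.x - agent.x, s.y - agent.y
        bearing = degrees(atan2(dy, dx))  # direction from agent to subject
        # Relative yaw of 0 means the face is seen frontally by the camera.
        relative_yaw = (s.yaw - (bearing + 180.0) + 180.0) % 360.0 - 180.0
        observations.append(Observation(s.face_id, hypot(dx, dy), relative_yaw))
    return observations


if __name__ == "__main__":
    # A home-like scenario: two subjects, an agent moving along a straight path.
    scene = [Subject("person_01", 3.0, 0.0, 180.0),
             Subject("person_02", 4.0, 2.0, -90.0)]
    trajectory = [AgentPose(t * 0.5, 0.0, 0.0) for t in range(5)]
    for step, pose in enumerate(trajectory):
        for obs in observe(pose, scene):
            print(f"step {step}: {obs.face_id} at {obs.distance:.2f} m, "
                  f"relative yaw {obs.relative_yaw:+.1f} deg")
```

Such a per-step record of identity, distance, and relative pose is what allows recognition results to be evaluated against ground truth along any simulated trajectory.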