Visual attention saccadic models learn to emulate gaze patterns from childhood to adulthood
Article
Publication date
2017
Author
Le Meur, Olivier
Abstract
How people look at visual information reveals fundamental information about themselves, their interests, and their state of mind. While previous visual attention models output static 2D saliency maps, saccadic models aim to predict not only where observers look but also how they move their eyes to explore the scene. In this paper, we demonstrate that saccadic models are a flexible framework that can be tailored to emulate observers' viewing tendencies. More specifically, we use fixation data from 101 observers split into five age groups (adults, 8-10 y.o., 6-8 y.o., 4-6 y.o., and 2 y.o.) to train our saccadic model for different stages of the development of the human visual system. We show that the joint distribution of saccade amplitude and orientation is a visual signature specific to each age group, and can be used to generate age-dependent scan paths. Our age-dependent saccadic model not only outputs human-like, age-specific visual scan paths, but also significantly outperforms other state-of-the-art saliency models. We demonstrate that the computational modeling of visual attention, through the use of saccadic models, can be efficiently adapted to emulate the gaze behavior of a specific group of observers.
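The abstract's central idea, generating scan paths by sampling from a group-specific joint distribution of saccade amplitude and orientation, can be illustrated with a short sketch. This is not the authors' implementation; the toy distribution, bin values, and function name below are hypothetical, and the sketch ignores the saliency-map term that the full model also uses.

```python
import numpy as np

def generate_scanpath(joint_pdf, amp_bins, ori_bins, n_fixations=10,
                      start=(0.5, 0.5), rng=None):
    """Sample a scan path by drawing (amplitude, orientation) pairs
    from a discretized joint distribution (hypothetical sketch)."""
    rng = np.random.default_rng(rng)
    # Flatten the joint histogram into one categorical distribution.
    p = joint_pdf.ravel() / joint_pdf.sum()
    path = [np.asarray(start, dtype=float)]
    for _ in range(n_fixations - 1):
        idx = rng.choice(p.size, p=p)
        i, j = np.unravel_index(idx, joint_pdf.shape)
        # Convert the sampled (amplitude, orientation) into a displacement.
        step = amp_bins[i] * np.array([np.cos(ori_bins[j]),
                                       np.sin(ori_bins[j])])
        # Keep fixations inside the normalized image frame [0, 1]^2.
        path.append(np.clip(path[-1] + step, 0.0, 1.0))
    return np.stack(path)

# Toy "visual signature": short, mostly horizontal saccades (made up).
amp_bins = np.array([0.05, 0.15, 0.30])            # amplitudes (image widths)
ori_bins = np.deg2rad([0.0, 90.0, 180.0, 270.0])   # orientations (radians)
joint = np.array([[4.0, 1.0, 4.0, 1.0],
                  [2.0, 0.5, 2.0, 0.5],
                  [1.0, 0.2, 1.0, 0.2]])
path = generate_scanpath(joint, amp_bins, ori_bins, n_fixations=8, rng=0)
```

Swapping in a joint histogram estimated from a different age group's fixation data would change the sampled displacements, which is what makes the generated scan paths age-dependent.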
Funding
National Natural Science Foundation of China
61471230
Indexation
ISI-indexed publication
How to cite
IEEE Transactions on Image Processing 2017, 26 (10), pp. 4777-4789