Depth Maps-Based Human Segmentation and Action Recognition Using Full-Body Plus Body Color Cues Via Recognizer Engine
Author
dc.contributor.author
Jalal, Ahmad
Author
dc.contributor.author
Kamal, Shaharyar
Author
dc.contributor.author
Azurdia Meza, César
Accession date
dc.date.accessioned
2019-10-30T15:28:56Z
Available date
dc.date.available
2019-10-30T15:28:56Z
Publication date
dc.date.issued
2019
Item citation
dc.identifier.citation
Journal of Electrical Engineering and Technology, Volumen 14, Issue 1, 2019, Pages 455-461
Identifier
dc.identifier.issn
2093-7423
Identifier
dc.identifier.issn
1975-0102
Identifier
dc.identifier.other
10.1007/s42835-018-00012-w
Identifier
dc.identifier.uri
https://repositorio.uchile.cl/handle/2250/172423
Abstract
dc.description.abstract
Assessing human behavior during daily routine actions in indoor environments plays a significant role in healthcare services and smart homes for elderly and disabled people. In the proposed approach, depth images are first captured with a depth camera and human silhouettes are segmented based on color and intensity variation. Spatiotemporal features are then extracted from the body color joints and the depth silhouettes: joint displacement and specific-motion features are obtained from the body color joints, while side-frame differentiation features are computed from the depth data to improve classification performance. Finally, a recognizer engine is used to recognize the different activities. Unlike conventional approaches evaluated on a single dataset, our experimental results show state-of-the-art accuracies of 88.9% and 66.70% over two challenging depth datasets. The proposed system should be serviceable, with major contributions to consumer applications such as smart homes, video surveillance and health monitoring systems.
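The sketch below illustrates, in outline only, the pipeline the abstract describes: depth silhouette segmentation, joint displacement and frame differentiation features, and a final classifier. The depth thresholds, joint count, synthetic data, and the SVM used as a stand-in for the paper's "recognizer engine" are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the abstract's pipeline, assuming NumPy arrays for depth
# frames and skeleton joints. All numeric parameters are hypothetical.
import numpy as np
from sklearn.svm import SVC  # stand-in classifier, not the paper's recognizer engine


def segment_silhouette(depth_frame, near=500, far=4000):
    """Keep pixels within an assumed working depth range (millimetres)."""
    mask = (depth_frame > near) & (depth_frame < far)
    return depth_frame * mask


def joint_displacement(joints_t, joints_t1):
    """Per-joint displacement between consecutive frames (J x 3 arrays)."""
    return np.linalg.norm(joints_t1 - joints_t, axis=1)


def frame_differentiation(silh_t, silh_t1):
    """Frame differentiation: absolute depth change between silhouettes."""
    return np.abs(silh_t1.astype(np.int64) - silh_t.astype(np.int64))


def feature_vector(depth_seq, joint_seq):
    """Concatenate motion features over a short depth/joint sequence."""
    disp = [joint_displacement(joint_seq[i], joint_seq[i + 1])
            for i in range(len(joint_seq) - 1)]
    diff = [frame_differentiation(segment_silhouette(depth_seq[i]),
                                  segment_silhouette(depth_seq[i + 1])).mean()
            for i in range(len(depth_seq) - 1)]
    return np.concatenate([np.mean(disp, axis=0), np.array(diff)])


# Usage with synthetic data: 10-frame clips, 20 joints, 120x160 depth maps.
rng = np.random.default_rng(0)
X = np.stack([feature_vector(rng.integers(0, 5000, (10, 120, 160)),
                             rng.normal(size=(10, 20, 3)))
              for _ in range(20)])
y = rng.integers(0, 3, 20)          # three dummy action labels
clf = SVC().fit(X, y)               # recognizer-engine stand-in
print(clf.predict(X[:2]))
```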