Continuous perception for deformable objects understanding
Article

Access note
Open Access
Publication date
2019
Abstract
We present a robot vision approach to deformable object classification, with direct application to autonomous service robots. Our approach is based on the assumption that continuous perception provides robots with greater visual competence for deformable object interpretation and classification. Our approach classifies the category of clothing items by continuously perceiving the dynamic interactions of the garment's material and shape as it is being picked up. For this, we continuously extract visual features from an RGB-D video sequence and fuse them by means of the Locality Constrained Group Sparse Representation (LGSR) algorithm. To evaluate the performance of our approach, we created a fully annotated database featuring 150 garment videos in random configurations. Experiments demonstrate that by continuously observing an object deform, our approach achieves a classification score of 66.7%, outperforming state-of-the-art approaches by ∼27.3%.
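The abstract's classification step rests on group-sparse representation: a test feature vector is coded over a dictionary whose columns are grouped by garment class, with a penalty that drives whole groups to zero, and the item is assigned to the class whose group reconstructs it best. The paper's actual LGSR formulation (with its locality constraint) is not reproduced here; the following is a simplified sketch of the underlying group-sparse classification idea, assuming a plain group-lasso penalty solved by proximal gradient descent. All names (`group_sparse_code`, `classify`, `lam`, `iters`) are illustrative, not from the paper.

```python
import numpy as np

def group_sparse_code(D, y, groups, lam=0.1, step=None, iters=200):
    """Solve min_x 0.5*||y - D x||^2 + lam * sum_g ||x_g||_2
    by proximal gradient descent (a simplified stand-in for LGSR,
    without the paper's locality constraint)."""
    x = np.zeros(D.shape[1])
    if step is None:
        # 1/L with L the Lipschitz constant of the smooth term's gradient
        step = 1.0 / np.linalg.norm(D, 2) ** 2
    for _ in range(iters):
        grad = D.T @ (D @ x - y)          # gradient of the data-fit term
        z = x - step * grad               # gradient step
        for g in groups:                  # group soft-thresholding (prox step)
            nrm = np.linalg.norm(z[g])
            z[g] = 0.0 if nrm <= lam * step else (1 - lam * step / nrm) * z[g]
        x = z
    return x

def classify(D, y, groups):
    """Assign y to the class whose dictionary group best reconstructs it."""
    x = group_sparse_code(D, y, groups)
    residuals = [np.linalg.norm(y - D[:, g] @ x[g]) for g in groups]
    return int(np.argmin(residuals))
```

In this sketch each `groups[c]` holds the column indices of class `c`'s training features; the class with the smallest reconstruction residual wins, which is the standard decision rule for sparse-representation classifiers.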
Indexation
SCOPUS-indexed article
Identifier
URI: https://repositorio.uchile.cl/handle/2250/172602
DOI: 10.1016/j.robot.2019.05.010
ISSN: 0921-8890
Citation
Robotics and Autonomous Systems, Volume 118,