Continuous perception for deformable objects understanding
Author
dc.contributor.author
Martínez, Luz
Author
dc.contributor.author
Ruiz del Solar, Javier
Author
dc.contributor.author
Sun, Li
Author
dc.contributor.author
Siebert, J. Paul
Author
dc.contributor.author
Aragon-Camarasa, Gerardo
Accession date
dc.date.accessioned
2019-10-30T15:40:22Z
Available date
dc.date.available
2019-10-30T15:40:22Z
Publication date
dc.date.issued
2019
Item citation
dc.identifier.citation
Robotics and Autonomous Systems, Volume 118,
Identifier
dc.identifier.issn
0921-8890
Identifier
dc.identifier.other
10.1016/j.robot.2019.05.010
Identifier
dc.identifier.uri
https://repositorio.uchile.cl/handle/2250/172602
Abstract
dc.description.abstract
We present a robot vision approach to deformable object classification, with direct application to autonomous service robots. Our approach is based on the assumption that continuous perception provides robots with greater visual competence for interpreting and classifying deformable objects. Our approach classifies the category of clothing items by continuously perceiving the dynamic interactions of the garment's material and shape as it is being picked up. For this, we continuously extract visual features from an RGB-D video sequence and fuse them by means of the Locality-Constrained Group Sparse Representation (LGSR) algorithm. To evaluate the performance of our approach, we created a fully annotated database featuring 150 garment videos in random configurations. Experiments demonstrate that by continuously observing an object deform, our approach achieves a classification score of 66.7%, outperforming state-of-the-art approaches by ∼27.3%.
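Sparse-representation classifiers such as the LGSR algorithm named in the abstract assign the class whose feature dictionary best reconstructs the query features. The following minimal sketch illustrates only that residual-based decision rule, substituting plain least squares for the locality-constrained group sparse solver; all function names, class labels, and data are hypothetical, not from the paper.

```python
import numpy as np

def residual_classify(dictionaries, query):
    """Pick the class whose dictionary reconstructs `query` with the
    smallest residual. Uses unconstrained least squares, a simplified
    stand-in for sparse/locality-constrained coding (assumption)."""
    best_label, best_residual = None, np.inf
    for label, D in dictionaries.items():
        # Solve D @ coef ≈ query without sparsity or locality penalties.
        coef, *_ = np.linalg.lstsq(D, query, rcond=None)
        residual = np.linalg.norm(query - D @ coef)
        if residual < best_residual:
            best_label, best_residual = label, residual
    return best_label

# Toy example: two hypothetical garment classes with distinct
# feature directions; the query lies near the "towel" subspace.
rng = np.random.default_rng(0)
base_towel = np.ones(8)
base_shirt = np.arange(8.0)
D_towel = np.column_stack([base_towel + 0.1 * rng.normal(size=8)
                           for _ in range(5)])
D_shirt = np.column_stack([base_shirt + 0.1 * rng.normal(size=8)
                           for _ in range(5)])
query = base_towel + 0.05 * rng.normal(size=8)
print(residual_classify({"towel": D_towel, "shirt": D_shirt}, query))
```

In the paper's actual pipeline, per-frame RGB-D features of the deforming garment are fused across the video sequence before this kind of class decision is made; the sketch above omits that temporal fusion entirely.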