Decentralized reinforcement learning applied to mobile robots
Author
dc.contributor.author
Leottau, David L.
Author
dc.contributor.author
Vatsyayan, Aashish
Author
dc.contributor.author
Ruiz del Solar, Javier
Author
dc.contributor.author
Babuška, Robert
Accession date
dc.date.accessioned
2019-05-29T13:39:19Z
Available date
dc.date.available
2019-05-29T13:39:19Z
Publication date
dc.date.issued
2017
Item citation
dc.identifier.citation
Lecture Notes in Computer Science (LNCS, volume 9776), 2017
Identifier
dc.identifier.issn
1611-3349
Identifier
dc.identifier.issn
0302-9743
Identifier
dc.identifier.other
10.1007/978-3-319-68792-6_31
Identifier
dc.identifier.uri
https://repositorio.uchile.cl/handle/2250/169053
Abstract
dc.description.abstract
In this paper, decentralized reinforcement learning is applied to a control problem with a multidimensional action space. We propose a decentralized reinforcement learning architecture for a mobile robot, in which the individual components of the commanded velocity vector are learned in parallel by separate agents. We empirically demonstrate that the decentralized architecture outperforms its centralized counterpart in terms of learning time, while using fewer computational resources. The method is validated on two problems: an extended version of the three-dimensional mountain car, and a ball-pushing behavior performed with a differential-drive robot, which is also tested on a physical setup.
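The core idea in the abstract, one independent learner per component of the commanded velocity vector, all sharing the same state and global reward, can be sketched with tabular Q-learning. This is a minimal toy illustration, not the authors' implementation: the grid task, reward values, and hyperparameters below are all hypothetical, chosen only to show one agent per action dimension learning in parallel.

```python
import random

# Toy task (hypothetical, for illustration): reach the goal cell of a
# small grid. The joint action is a velocity vector (vx, vy); in the
# decentralized scheme, each component is learned by its own
# independent Q-learner that observes the full state and receives the
# shared global reward.

SIZE, GOAL = 5, (4, 4)
ACTIONS = (-1, 0, 1)  # candidate values for each velocity component

def step(pos, vx, vy):
    x = min(max(pos[0] + vx, 0), SIZE - 1)
    y = min(max(pos[1] + vy, 0), SIZE - 1)
    nxt = (x, y)
    return nxt, (1.0 if nxt == GOAL else -0.01), nxt == GOAL

def train(episodes=2000, alpha=0.2, gamma=0.95, eps=0.1):
    qx, qy = {}, {}  # one Q-table per velocity component

    def q(tab, s, a):
        return tab.get((s, a), 0.0)

    def greedy(tab, s):
        return max(ACTIONS, key=lambda a: q(tab, s, a))

    for _ in range(episodes):
        s = (0, 0)
        for _ in range(50):
            ax = random.choice(ACTIONS) if random.random() < eps else greedy(qx, s)
            ay = random.choice(ACTIONS) if random.random() < eps else greedy(qy, s)
            s2, r, done = step(s, ax, ay)
            # Independent update: each agent treats the other as part of
            # the environment and bootstraps on its own table only.
            for tab, a in ((qx, ax), (qy, ay)):
                target = r + (0.0 if done else
                              gamma * max(q(tab, s2, b) for b in ACTIONS))
                tab[(s, a)] = q(tab, s, a) + alpha * (target - q(tab, s, a))
            s = s2
            if done:
                break
    return qx, qy

random.seed(0)
qx, qy = train()

# Roll out the greedy joint policy from the start state.
s, steps = (0, 0), 0
while s != GOAL and steps < 20:
    ax = max(ACTIONS, key=lambda a: qx.get((s, a), 0.0))
    ay = max(ACTIONS, key=lambda a: qy.get((s, a), 0.0))
    s, _, _ = step(s, ax, ay)
    steps += 1
```

Because the state and action spaces here are tiny, the point is only structural: each agent's table covers |S| x 3 entries instead of the |S| x 9 joint table a centralized learner would need, which is the source of the smaller memory footprint and faster exploration the paper reports.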