Decentralized reinforcement learning of robot behaviors
Article
Publication date
2018
How to cite
Leottau, David L.
Decentralized reinforcement learning of robot behaviors
Abstract
A multi-agent methodology is proposed for Decentralized Reinforcement Learning (DRL) of individual behaviors in problems with multi-dimensional action spaces. Under this methodology, sub-tasks are learned in parallel by individual agents working toward a common goal. In addition to the methodology itself, three specific multi-agent DRL approaches are considered: DRL-Independent, DRL-Cooperative-Adaptive (CA), and DRL-Lenient. These approaches are validated and analyzed in an extensive empirical study on four different problems: 3D Mountain Car, SCARA Real-Time Trajectory Generation, Ball-Dribbling in humanoid soccer robotics, and Ball-Pushing using differential-drive robots. The experimental validation provides evidence that DRL implementations achieve better performance and faster learning times than their centralized counterparts, while using fewer computational resources. The DRL-Lenient and DRL-CA algorithms achieve the best final performance on all four tested problems, outperforming their DRL-Independent counterparts. Furthermore, the benefits of DRL-Lenient and DRL-CA become more noticeable as problem complexity increases and the centralized scheme becomes intractable given the available computational resources and training time.
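The core idea of the decentralized scheme can be illustrated with a minimal sketch of independent learners: one tabular Q-learning agent per action dimension, all observing the same global state and receiving the same reward, so the joint action space is never enumerated explicitly. The toy environment, state/action sizes, and hyperparameters below are illustrative assumptions for demonstration, not taken from the paper.

```python
import random

random.seed(0)

class IndependentAgent:
    """One Q-learning agent responsible for a single action dimension."""

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.95, eps=0.2):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, s):
        # Epsilon-greedy over this agent's own (small) action set.
        if random.random() < self.eps:
            return random.randrange(len(self.q[s]))
        row = self.q[s]
        return row.index(max(row))

    def update(self, s, a, r, s2):
        # Standard one-step Q-learning update on this agent's own table.
        target = r + self.gamma * max(self.q[s2])
        self.q[s][a] += self.alpha * (target - self.q[s][a])


def train(env_step, n_states, dims, episodes=200, horizon=50):
    """dims: per-dimension action counts; one independent agent per dimension."""
    agents = [IndependentAgent(n_states, n) for n in dims]
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            # Each agent contributes one component of the joint action.
            joint = [ag.act(s) for ag in agents]
            s2, r, done = env_step(s, joint)
            # All agents learn in parallel from the shared reward.
            for ag, a in zip(agents, joint):
                ag.update(s, a, r, s2)
            s = s2
            if done:
                break
    return agents


# Hypothetical 2-dimensional coordination task: the state advances toward
# the goal only when both action components equal 1.
def toy_step(s, joint, n_states=5):
    s2 = min(s + 1, n_states - 1) if all(a == 1 for a in joint) else max(s - 1, 0)
    done = s2 == n_states - 1
    return s2, (1.0 if done else -0.1), done


agents = train(toy_step, n_states=5, dims=[2, 2])
```

Note that each agent's table grows only with its own action count, which is the source of the resource savings the abstract reports over a centralized learner whose table covers the full Cartesian product of action dimensions.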
Sponsor
CONICYT
CONICYT-PCHA/Doctorado Nacional/2013-63130183
FONDECYT
1161500
European Regional Development Fund under the project Robotics 4 Industry 4.0
CZ.02.1.01/0.0/0.0/15_003/0000470
Indexation
ISI-indexed article
Citation
Artificial Intelligence, 256 (2018): 130–159