Reinforcement learning with restrictions on the action set
Author
dc.contributor.author
Bravo, Mario
Author
dc.contributor.author
Faure, Mathieu
Admission date
dc.date.accessioned
2015-08-05T15:26:21Z
Available date
dc.date.available
2015-08-05T15:26:21Z
Publication date
dc.date.issued
2015
Item citation
dc.identifier.citation
SIAM J. Control Optim., Vol. 53, No. 1, pp. 287–312
en_US
Identifier
dc.identifier.other
DOI: 10.1137/130936488
Identifier
dc.identifier.uri
https://repositorio.uchile.cl/handle/2250/132415
General note
dc.description
Article published in an ISI-indexed journal
en_US
Abstract
dc.description.abstract
Consider a two-player normal-form game repeated over time. We introduce an
adaptive learning procedure, where the players only observe their own realized payoff at each stage.
We assume that agents do not know their own payoff function and have no information on the other
player. Furthermore, we assume that their actions are restricted: at each
stage, each player's choice is limited to a subset of her action set. We prove that the empirical distributions
of play converge to the set of Nash equilibria in zero-sum games, potential games, and games where
one player has only two actions.
en_US
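To illustrate the kind of procedure the abstract describes (not the authors' exact algorithm), the sketch below simulates payoff-based reinforcement in a repeated 2x2 zero-sum game (matching pennies), where at each stage each player may be restricted to a random subset of her actions and observes only her own realized payoff. The reinforcement rule, the restriction process, and the step sizes are all hypothetical choices for illustration.

```python
import math
import random

random.seed(0)

# Row player's payoff in matching pennies; the column player gets the negative.
PAYOFF = [[1, -1], [-1, 1]]

T = 20000
scores = [[0.0, 0.0], [0.0, 0.0]]   # cumulative propensities per player/action
counts = [[0, 0], [0, 0]]           # empirical counts of realized play

def choose(props, allowed):
    # Softmax-like choice restricted to the currently allowed actions
    # (a hypothetical rule, for illustration only).
    w = [math.exp(min(props[a], 50.0)) for a in allowed]
    r = random.uniform(0, sum(w))
    for a, wa in zip(allowed, w):
        r -= wa
        if r <= 0:
            return a
    return allowed[-1]

for t in range(1, T + 1):
    # Random restriction: occasionally only one action is available to a player.
    allowed1 = [random.randrange(2)] if random.random() < 0.1 else [0, 1]
    allowed2 = [random.randrange(2)] if random.random() < 0.1 else [0, 1]
    a1 = choose(scores[0], allowed1)
    a2 = choose(scores[1], allowed2)
    u1 = PAYOFF[a1][a2]
    u2 = -u1
    # Each player reinforces only the action she played, using only her own
    # realized payoff, with a vanishing step size 1/t.
    scores[0][a1] += u1 / t
    scores[1][a2] += u2 / t
    counts[0][a1] += 1
    counts[1][a2] += 1

freq1 = [c / T for c in counts[0]]
print(freq1)
```

In matching pennies the unique Nash equilibrium is (1/2, 1/2) for both players, so under this illustrative rule the printed empirical frequencies of the row player stay close to one half for each action.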
Sponsor
dc.description.sponsorship
Fondecyt grant 3130732
Núcleo Milenio Información y Coordinación en Redes ICM/FIC P10-024F
Complex Engineering Systems Institute
ICM: P-05-004-F
CONICYT: FBO16