We study repeated Bayesian games with communication and observable actions in
which the players’ privately known payoffs evolve according to an irreducible Markov
chain whose transitions are independent across players. Our main result implies that,
generically, any Pareto-efficient payoff vector above a stationary minmax value can be
approximated arbitrarily closely in a perfect Bayesian equilibrium as the discount factor
goes to 1. As an intermediate step, we construct an approximately efficient dynamic
mechanism for long finite horizons without assuming transferable utility.
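As a brief illustration of the informational assumption (the notation here is ours, not the paper's): writing player $i$'s privately known payoff state in period $t$ as $\theta^i_t$, independence of transitions across players means that the joint state $\theta_t = (\theta^1_t,\ldots,\theta^n_t)$ evolves according to
\[
\Pr\bigl(\theta_{t+1} = \theta' \mid \theta_t = \theta\bigr) \;=\; \prod_{i=1}^{n} P^i\bigl(\theta'^{\,i} \mid \theta^i\bigr),
\]
where each $P^i$ is the transition kernel of an irreducible Markov chain on player $i$'s own state space.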