Aspiration-Based Reinforcement Learning in Repeated Interaction Games: An Overview

By Jonathan Bendor, Dilip Mookherjee, Debraj Ray
International Game Theory Review, 2001, Vol. 3, Issue 2-3

In models of aspiration-based reinforcement learning, agents adapt by comparing payoffs achieved from actions chosen in the past with an aspiration level. Though such models are well-established in behavioural psychology, only recently have they begun to receive attention in game theory and its applications to economics and politics. This paper provides an informal overview of a range of such theories applied to repeated interaction games. We describe different models of aspiration formation, in which (1) aspirations are fixed but required to be consistent with long-run average payoffs; (2) aspirations evolve based on a player's own past experience or on that of previous generations of players; and (3) aspirations are based on the experience of peers. Convergence to non-Nash outcomes may result in any of these formulations. Indeed, cooperative behaviour can emerge and survive in the long run, even though it may be a strictly dominated strategy in the stage game, and despite the myopic adaptation of stage game strategies. Differences between reinforcement learning and evolutionary game theory are also discussed.
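To make the mechanism concrete, below is a minimal sketch of aspiration-based satisficing dynamics in a repeated prisoner's dilemma, roughly in the spirit of model (2) above. It is not the paper's formal model: the Bush-Mosteller-style probability update, the specific payoff values, and the learning-rate parameters are all illustrative assumptions. Each agent reinforces the action it just played when the realized payoff meets its aspiration, shifts probability away from it otherwise, and lets the aspiration track a moving average of its own past payoffs.

```python
import random

# Illustrative prisoner's dilemma payoffs (T=4 > R=3 > P=1 > S=0).
# PAYOFF[my_action][other_action] -> my payoff; 0 = defect, 1 = cooperate.
PAYOFF = [[1.0, 4.0],
          [0.0, 3.0]]

class AspirationLearner:
    """Satisficing agent: reinforce if payoff >= aspiration, else switch away."""

    def __init__(self, learn_rate=0.1, aspiration_rate=0.05):
        self.p_cooperate = 0.5        # probability of playing "cooperate"
        self.aspiration = 2.0         # initial aspiration level (illustrative)
        self.learn_rate = learn_rate
        self.aspiration_rate = aspiration_rate

    def choose(self):
        return 1 if random.random() < self.p_cooperate else 0

    def update(self, action, payoff):
        # If the realized payoff met the aspiration, move probability toward
        # the action just played; otherwise move it toward the other action.
        if payoff >= self.aspiration:
            target = 1.0 if action == 1 else 0.0
        else:
            target = 0.0 if action == 1 else 1.0
        self.p_cooperate += self.learn_rate * (target - self.p_cooperate)
        # Model (2)-style aspiration: an exponential moving average of the
        # agent's own realized payoffs.
        self.aspiration += self.aspiration_rate * (payoff - self.aspiration)

a, b = AspirationLearner(), AspirationLearner()
for _ in range(10000):
    x, y = a.choose(), b.choose()
    a.update(x, PAYOFF[x][y])
    b.update(y, PAYOFF[y][x])
print(a.p_cooperate, b.p_cooperate, a.aspiration, b.aspiration)
```

Under parameterizations like this one, runs can settle on mutual cooperation even though defection strictly dominates in the stage game: once both agents' aspirations drift near the cooperative payoff, unilateral defection by one agent pushes the other below its aspiration, destabilizing defection rather than reinforcing it. This is only a numerical illustration of the qualitative point made in the abstract, not a substitute for the paper's analysis.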