Reinforcement Behavior in Repeated Games
Working Paper No. 1533, 1998
This paper describes behavior conventions that are stable long-run outcomes of reinforcement behavior rules in two-person repeated games. Each player plays the repeated game with a fixed but endogenous aspiration, a payoff level that is considered "satisfactory." Choice probabilities are modified by experience: satisfactory payoff experiences positively reinforce probability weights on chosen actions, while unsatisfactory experiences cause other actions to be tried. Our equilibrium notion requires consistency, equality of aspiration levels with long-run average payoffs, and stability, robustness of outcomes with respect to random perturbations of each player's state. Our main result identifies the set of equilibrium pure strategy conventions: this comprises all efficient, strongly individually rational outcomes, and protected Nash equilibria. Extensions to mixed strategy conventions, and applications to games of coordination, cooperation, and oligopoly are discussed.
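The reinforcement rule described above can be sketched in code. The following is a minimal illustrative simulation, not the paper's exact specification: the Prisoner's Dilemma payoffs, the fixed aspiration level, the linear update rule, and the step size are all assumptions made for demonstration.

```python
import random

# Payoff matrix for a Prisoner's Dilemma (assumed for illustration):
# (row action, column action) -> (row payoff, column payoff)
PAYOFFS = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 4),
    ("D", "C"): (4, 0), ("D", "D"): (1, 1),
}
ACTIONS = ["C", "D"]

class Player:
    """A player who reinforces actions that meet an aspiration level."""

    def __init__(self, aspiration, step=0.1):
        self.aspiration = aspiration              # payoff deemed "satisfactory"
        self.step = step                          # learning rate (assumed)
        self.probs = {a: 0.5 for a in ACTIONS}    # initial choice probabilities

    def choose(self):
        return random.choices(ACTIONS, weights=[self.probs[a] for a in ACTIONS])[0]

    def update(self, action, payoff):
        # A satisfactory payoff positively reinforces the chosen action;
        # an unsatisfactory one shifts weight toward the other action.
        other = "D" if action == "C" else "C"
        if payoff >= self.aspiration:
            self.probs[action] += self.step * (1 - self.probs[action])
        else:
            self.probs[action] -= self.step * self.probs[action]
        self.probs[other] = 1.0 - self.probs[action]

def simulate(rounds=5000, seed=0):
    """Run the repeated game and return each player's long-run average payoff."""
    random.seed(seed)
    p1, p2 = Player(aspiration=2.5), Player(aspiration=2.5)
    total1 = total2 = 0.0
    for _ in range(rounds):
        a1, a2 = p1.choose(), p2.choose()
        u1, u2 = PAYOFFS[(a1, a2)]
        p1.update(a1, u1)
        p2.update(a2, u2)
        total1 += u1
        total2 += u2
    return total1 / rounds, total2 / rounds
```

With an aspiration between the mutual-defection and mutual-cooperation payoffs, such a rule tends to settle on action pairs whose payoffs satisfy both players, which is the intuition behind the paper's consistency requirement (aspiration equal to long-run average payoff).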