Evolution of Collective Fairness in Hybrid Populations of Humans and Agents


  • Fernando P. Santos University of Lisbon
  • Jorge M. Pacheco University of Minho
  • Ana Paiva University of Lisbon
  • Francisco C. Santos University of Lisbon




Fairness plays a fundamental role in decision-making, as evidenced by the high incidence of human behaviors that result in egalitarian outcomes. This is often shown in the context of dyadic interactions, typically through the Ultimatum Game. The peculiarities of group interactions – and their effect on eliciting fair actions – remain, however, largely unexplored. Focusing on groups suggests several questions related to the effect of group size, group decision rules, and the interrelation of human and agent behaviors in hybrid groups. To address these topics, here we test a Multiplayer version of the Ultimatum Game (MUG): proposals are made to groups of Responders that, collectively, accept or reject them. Firstly, we run an online experiment to evaluate how humans react to different group decision rules. We observe that people become increasingly fair if groups adopt stricter decision rules, i.e., if more individuals are required to accept a proposal for it to be accepted by the group. Secondly, we propose a new analytical model to shed light on how such behaviors may have evolved. Thirdly, we adapt our model to include agents with fixed behaviors. We show that including hardcoded Pro-social agents favors the evolutionary stability of fair states, even for soft group decision rules. This suggests that judiciously introducing agents with particular behaviors in a population may leverage long-term social benefits.
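The group decision rule described above can be illustrated with a minimal sketch of one MUG round. This is an assumed reading of the game's mechanics (unit endowment, offer shared equally among Responders on acceptance, payoff of zero on rejection); the function name and threshold-based Responder strategy are illustrative, not taken from the paper.

```python
def mug_round(offer, responder_thresholds, m):
    """One round of a Multiplayer Ultimatum Game (MUG) sketch.

    A Proposer offers a fraction `offer` of a unit endowment to a
    group of Responders. Each Responder accepts iff the offer meets
    their personal acceptance threshold. The group accepts only if
    at least `m` Responders accept (larger m = stricter rule).

    Returns (proposer_payoff, payoff_per_responder).
    """
    n = len(responder_thresholds)
    accepts = sum(offer >= q for q in responder_thresholds)
    if accepts >= m:                    # group accepts the proposal
        return 1.0 - offer, offer / n  # offer split among Responders
    return 0.0, 0.0                    # rejection: everyone gets nothing
```

For example, an offer of 0.4 to three Responders with thresholds 0.2, 0.3, and 0.5 is accepted under the rule m = 2 (two Responders accept) but rejected under the strictest rule m = 3, illustrating how stricter rules raise the bar an offer must clear.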




How to Cite

Santos, F. P., Pacheco, J. M., Paiva, A., & Santos, F. C. (2019). Evolution of Collective Fairness in Hybrid Populations of Humans and Agents. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 6146-6153. https://doi.org/10.1609/aaai.v33i01.33016146



AAAI Technical Track: Multiagent Systems