Taking Advice from (Dis)Similar Machines: The Impact of Human-Machine Similarity on Machine-Assisted Decision-Making


  • Nina Grgić-Hlača (Max Planck Institute for Software Systems; Max Planck Institute for Research on Collective Goods)
  • Claude Castelluccia (Inria)
  • Krishna P. Gummadi (Max Planck Institute for Software Systems)




Keywords: Machine-Assisted Decision Making, Decision Support Systems, Advice Taking, Human-Centered Machine Learning, Algorithmic Decision Making


Machine learning algorithms are increasingly used to assist human decision-making. When the goal of machine assistance is to improve the accuracy of human decisions, it might seem appealing to design ML algorithms that complement human knowledge. While neither the algorithm nor the human is perfectly accurate, one could expect their complementary expertise to lead to improved outcomes. In this study, we demonstrate that, in practice, decision aids that are not complementary, but instead make errors similar to human ones, may have benefits of their own. In a series of human-subject experiments with a total of 901 participants, we study how the similarity of human and machine errors influences human perceptions of and interactions with algorithmic decision aids. We find that (i) people perceive more similar decision aids as more useful, accurate, and predictable; (ii) people are more likely to take opposing advice from more similar decision aids; and (iii) decision aids that are less similar to humans have more opportunities to provide opposing advice, resulting in a higher overall influence on people's decisions.




How to Cite

Grgić-Hlača, N., Castelluccia, C., & Gummadi, K. P. (2022). Taking Advice from (Dis)Similar Machines: The Impact of Human-Machine Similarity on Machine-Assisted Decision-Making. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 10(1), 74–88. https://doi.org/10.1609/hcomp.v10i1.21989