Ethical Dilemmas for Adaptive Persuasion Systems
DOI: https://doi.org/10.1609/aaai.v30i1.9803
Keywords: AI and ethics, persuasion systems, moral dilemmas

Abstract
A key acceptability criterion for artificial agents will be the possible moral implications of their actions. Intelligent persuasive systems (systems designed to influence humans via communication) are a particularly sensitive case because of their intrinsically social nature. Still, ethical studies in this area are rare and tend to focus on the outcome of the required action; this work focuses instead on the acceptability of the persuasive acts themselves. Building systems able to persuade while remaining ethically acceptable requires that they be capable of intervening flexibly and of deciding which specific persuasive strategy to use. We show how a behavioral approach, based on human assessment of moral dilemmas, yields results that will lead to more ethically appropriate systems. The experiments we conducted address the type of persuader, the strategies adopted, and the circumstances. Dimensions emerged that characterize interpersonal differences in the moral acceptability of machine-performed persuasion and that can be used for strategy adaptation. We also show that the prevailing preconceived negative attitude toward persuasion by a machine is not predictive of actual moral acceptability judgments when subjects are confronted with specific cases.