The AI Field Needs Translational Ethical AI Research
Calls for Ethical AI have become urgent and pervasive, especially as ethical issues surrounding AI products at tech companies are increasingly scrutinized by the public. Yet even after a first wave of responses to these calls coalesced around Ethical AI principles to guide decision-making and a second wave generated technical tools to mitigate specific ethical issues, multiple lines of evidence indicate that these Ethical AI principles and technical tools have only a limited impact on the daily practices of AI users and producers. In other words, there is a large gap between what we publish in academic papers and what AI creators need to generate AI products that reflect society's values. Ethical AI is by no means the only field to have this problem. However, when the medical and ecology fields documented similar gaps between their scientific discoveries and the practices and products that people actually use, they invested tremendous resources into subfields that developed evidence about how to translate what was done in the lab into adopted solutions. I argue in this commentary that it is our research community's moral duty to invest in our own subfield of “Translational Ethical AI” that will determine how best to ensure AI practitioners can implement, in production settings, the Ethical AI technical tools we publish in academic venues. Further, I offer concrete steps for doing so, drawing on insights gleaned from other translational fields. Closing the “Ethical AI Publication-to-Practice gap” will be a considerable transdisciplinary challenge, but one that the AI research community has the unique expertise, political leverage, and moral responsibility to tackle.
Copyright (c) 2022 Jana Schaich Borg
This work is licensed under a Creative Commons Attribution 4.0 International License.