Virtual Assistants Are Unlikely to Reduce Patient Non-Disclosure
DOI: https://doi.org/10.1609/aies.v7i1.31668
Abstract
The ethical use of AI typically involves setting boundaries on its deployment. Ethical guidelines advise against practices involving deception, privacy infringement, or discrimination. However, ethical considerations can also identify areas where using AI is desirable and even morally necessary. For instance, it has been argued that AI could contribute to more equitable justice systems. Healthcare is another area where ethical considerations can make AI deployment imperative. For example, patients often withhold pertinent details from healthcare providers out of fear of judgment. Using virtual assistants to gather patients' health histories is a potential solution: if patients are more inclined to disclose information to an AI system, there is an ethical imperative to deploy such technology. This article presents findings from several survey studies investigating whether virtual assistants can reduce non-disclosure behaviors. Unfortunately, the evidence suggests that virtual assistants are unlikely to minimize non-disclosure. Therefore, any benefits of virtual assistants arising from reduced non-disclosure are unlikely to outweigh their ethical risks.
Published: 2024-10-16
How to Cite
Jorgenson, C., Ozkes, A. I., Willems, J., & Vanderelst, D. (2024). Virtual Assistants Are Unlikely to Reduce Patient Non-Disclosure. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 659-669. https://doi.org/10.1609/aies.v7i1.31668
Section: Full Archival Papers