‘Beach’ to ‘Bitch’: Inadvertent Unsafe Transcription of Kids’ Content on YouTube

Authors

  • Krithika Ramesh, Manipal Institute of Technology
  • Ashiqur R. KhudaBukhsh, Rochester Institute of Technology
  • Sumeet Kumar, Indian School of Business

DOI:

https://doi.org/10.1609/aaai.v36i11.21470

Keywords:

AI For Social Impact (AISI Track Papers Only)

Abstract

Over the last few years, YouTube Kids has emerged as a highly competitive alternative to television for children's entertainment. Consequently, YouTube Kids' content should receive an additional level of scrutiny to ensure children's safety. While research on detecting offensive or inappropriate content for kids is gaining momentum, little to no existing work investigates to what extent AI applications can (accidentally) introduce content that is inappropriate for kids. In this paper, we present a novel (and troubling) finding that well-known automatic speech recognition (ASR) systems may produce text content highly inappropriate for kids while transcribing YouTube Kids' videos. We dub this phenomenon inappropriate content hallucination. Our analyses suggest that such hallucinations are far from occasional, and that ASR systems often produce them with high confidence. We release a first-of-its-kind dataset of audio clips for which existing state-of-the-art ASR systems hallucinate content inappropriate for kids. In addition, we demonstrate that some of these errors can be fixed using language models.
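A minimal, hypothetical sketch of the language-model repair idea mentioned in the abstract's last sentence (not the paper's exact method): mask a flagged word in the transcript and let an off-the-shelf masked language model propose a more contextually plausible replacement. The example sentence, the lexicon-based flagging step, and the choice of bert-base-uncased via the Hugging Face transformers library are all assumptions made for illustration.

    # Hypothetical sketch of LM-based correction of an unsafe ASR hallucination.
    # Assumptions: the suspect word has already been flagged (e.g., by a profanity
    # lexicon), and bert-base-uncased stands in for whatever LM one might use.
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    transcript = "we built sandcastles at the bitch all afternoon"
    suspect_word = "bitch"  # flagged as inappropriate for kids

    # Replace the flagged word with the model's mask token and ask for candidates.
    masked = transcript.replace(suspect_word, fill_mask.tokenizer.mask_token, 1)
    for cand in fill_mask(masked, top_k=5):
        print(cand["token_str"], round(cand["score"], 3))
    # A high-ranking candidate such as "beach" would restore the intended word.

In practice, such LM candidates would likely be combined with the ASR system's own acoustic and confidence scores before any substitution is accepted.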

Published

2022-06-28

How to Cite

Ramesh, K., KhudaBukhsh, A. R., & Kumar, S. (2022). ‘Beach’ to ‘Bitch’: Inadvertent Unsafe Transcription of Kids’ Content on YouTube. Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 12108-12118. https://doi.org/10.1609/aaai.v36i11.21470