Rethinking the Maturity of Artificial Intelligence in Safety-Critical Settings
Artificial intelligence, in the form of machine learning, has the potential to transform many safety-critical applications, such as those in transportation and healthcare. However, despite significant investment and impressive demonstrations, these technologies have struggled to live up to their promise. This article argues that machine learning fundamentally lacks the ability to leverage top-down reasoning, a critical element of safety-critical systems. This shortcoming matters most in situations where uncertainty grows quickly and adaptation to unknowns is required. The absence of contextual reasoning, combined with a limited understanding of what constitutes maturity in systems with embedded artificial intelligence, has contributed significantly to the failures of these systems. Demonstrations in which safety-critical, artificial intelligence-enabled systems function as if they were almost operational should not substitute for testing. Instead, companies and regulatory agencies must work together to develop clear criteria and certification protocols before such technologies are made publicly available.