Scalable Partial Explainability in Neural Networks via Flexible Activation Functions (Student Abstract)

Authors

  • Schyler C. Sun, Cranfield University
  • Chen Li, Cranfield University
  • Zhuangkun Wei, University of Warwick
  • Antonios Tsourdos, Cranfield University
  • Weisi Guo, Cranfield University; The Alan Turing Institute

DOI:

https://doi.org/10.1609/aaai.v35i18.17946

Keywords:

XAI, Adaptive Activation Function, Qualitative Reasoning, Neuro-symbolic Reasoning

Abstract

Current state-of-the-art neural network explanation methods (e.g., saliency maps, DeepLIFT, LIME) focus on the direct relationship between NN inputs and outputs rather than on the NN structure and operations themselves, so uncertainty remains over the exact role played by individual neurons. In this paper, we propose a novel neural network structure with a topology based on the Kolmogorov-Arnold superposition theorem and flexible activation functions based on Gaussian processes, achieving partial explainability of the neurons' inner reasoning. The feasibility of the model is verified in a case study on binary classification of banknotes.
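To make the proposed architecture concrete, below is a minimal sketch (not the authors' code) in PyTorch of a network with the Kolmogorov-Arnold topology f(x) = sum_q Phi_q( sum_p phi_{q,p}(x_p) ), where each univariate phi and Phi is a flexible activation parameterised as an RBF-kernel smoother over learnable inducing values, a GP-posterior-mean-like stand-in for the paper's Gaussian-process activations. All class names, hyperparameters, and the toy data are assumptions for illustration.

import torch
import torch.nn as nn


class GPActivation(nn.Module):
    """Flexible univariate activation: an RBF-kernel smoother over fixed
    inducing locations with trainable values (hypothetical stand-in for a
    GP posterior mean)."""

    def __init__(self, n_inducing=16, x_min=-3.0, x_max=3.0, lengthscale=0.5):
        super().__init__()
        self.register_buffer("z", torch.linspace(x_min, x_max, n_inducing))
        self.values = nn.Parameter(0.1 * torch.randn(n_inducing))
        self.lengthscale = lengthscale

    def forward(self, x):
        # Normalised RBF weights between each input and the inducing locations
        d = (x.unsqueeze(-1) - self.z) / self.lengthscale
        w = torch.softmax(-0.5 * d.pow(2), dim=-1)
        return (w * self.values).sum(dim=-1)


class KANet(nn.Module):
    """Kolmogorov-Arnold topology: f(x) = sum_q Phi_q( sum_p phi_{q,p}(x_p) ),
    with every inner phi and outer Phi a flexible univariate activation."""

    def __init__(self, n_inputs):
        super().__init__()
        n_units = 2 * n_inputs + 1  # 2n + 1 outer units, as in the theorem
        self.inner = nn.ModuleList(
            nn.ModuleList(GPActivation() for _ in range(n_inputs))
            for _ in range(n_units)
        )
        self.outer = nn.ModuleList(GPActivation() for _ in range(n_units))

    def forward(self, x):
        out = 0.0
        for phis, Phi in zip(self.inner, self.outer):
            s = sum(phi(x[:, p]) for p, phi in enumerate(phis))
            out = out + Phi(s)
        return out  # one scalar logit per sample


# Toy binary-classification run (synthetic stand-in for the banknote data)
if __name__ == "__main__":
    torch.manual_seed(0)
    X = torch.randn(256, 4)                   # 4 features, as in the banknote set
    y = (X[:, 0] + X[:, 1] ** 2 > 1).float()  # synthetic labels
    model = KANet(n_inputs=4)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(200):
        loss = nn.functional.binary_cross_entropy_with_logits(model(X), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"final training loss: {loss.item():.3f}")

Because each learned univariate activation can be plotted directly against its scalar input, this topology is what makes the neurons' inner reasoning partially inspectable.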

Published

2021-05-18

How to Cite

Sun, S. C., Li, C., Wei, Z., Tsourdos, A., & Guo, W. (2021). Scalable Partial Explainability in Neural Networks via Flexible Activation Functions (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 35(18), 15899-15900. https://doi.org/10.1609/aaai.v35i18.17946

Section

AAAI Student Abstract and Poster Program