ErfAct and Pserf: Non-monotonic Smooth Trainable Activation Functions
Keywords: Machine Learning (ML), Computer Vision (CV)
Abstract
An activation function is a crucial component of a neural network because it introduces non-linearity into the network. State-of-the-art performance also depends on choosing an appropriate activation function. We propose two novel non-monotonic, smooth, trainable activation functions, called ErfAct and Pserf. Experiments suggest that the proposed functions improve network performance significantly compared to widely used activations such as ReLU, Swish, and Mish. Replacing ReLU with ErfAct and Pserf yields 5.68% and 5.42% improvements in top-1 accuracy on the ShuffleNet V2 (2.0x) network on the CIFAR100 dataset, 2.11% and 1.96% improvements in top-1 accuracy on the ShuffleNet V2 (2.0x) network on the CIFAR10 dataset, and 1.0% and 1.0% improvements in mean average precision (mAP) with the SSD300 model on the Pascal VOC dataset.
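Since the abstract describes ErfAct and Pserf as trainable drop-in replacements for ReLU, a minimal PyTorch-style sketch of such modules may help. The functional forms below, ErfAct(x) = x·erf(α·e^{βx}) and Pserf(x) = x·erf(γ·ln(1+e^{δx})), are taken to be the paper's definitions as reported elsewhere, and the initial parameter values are illustrative assumptions rather than the authors' settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ErfAct(nn.Module):
    """Assumed form: ErfAct(x) = x * erf(alpha * exp(beta * x)),
    with alpha and beta learned jointly with the network weights."""
    def __init__(self, alpha=0.75, beta=0.75):  # illustrative initial values
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(float(alpha)))
        self.beta = nn.Parameter(torch.tensor(float(beta)))

    def forward(self, x):
        return x * torch.erf(self.alpha * torch.exp(self.beta * x))

class Pserf(nn.Module):
    """Assumed form: Pserf(x) = x * erf(gamma * ln(1 + exp(delta * x))),
    i.e. a parametric Serf; gamma and delta are trainable."""
    def __init__(self, gamma=1.25, delta=0.85):  # illustrative initial values
        super().__init__()
        self.gamma = nn.Parameter(torch.tensor(float(gamma)))
        self.delta = nn.Parameter(torch.tensor(float(delta)))

    def forward(self, x):
        # softplus(delta * x) computes ln(1 + exp(delta * x)) stably
        return x * torch.erf(self.gamma * F.softplus(self.delta * x))

# Usage: swap in for ReLU anywhere in an architecture, e.g.
# model = nn.Sequential(nn.Linear(128, 128), ErfAct(), nn.Linear(128, 10))
```

Because the shape parameters are `nn.Parameter`s, they are updated by the same optimizer step as the rest of the network, which is what makes these activations "trainable" in the sense used in the abstract.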
How to Cite
Biswas, K., Kumar, S., Banerjee, S., & Pandey, A. K. (2022). ErfAct and Pserf: Non-monotonic Smooth Trainable Activation Functions. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6097-6105. https://doi.org/10.1609/aaai.v36i6.20557
AAAI Technical Track on Machine Learning I