Evaluating the Efficiency and Performance of Minimalist Neural Network Architectures with Hybrid Activation Functions

Authors

  • Peter Makieu, School of Electronic and Information Engineering, Suzhou University of Science and Technology, Jiangsu Province, China.
  • Jackline Mutwiri, School of Environmental Engineering, Suzhou University of Science and Technology, Jiangsu Province, China.
  • Justin Jupayma Martor, School of Environmental Engineering, Suzhou University of Science and Technology, Jiangsu Province, China.

DOI:

https://doi.org/10.61841/5zs7sn39

Keywords:

Neural networks, hybrid activation functions, minimalist architectures, adversarial robustness, edge AI, dynamic adaptation

Abstract

References

Bishop, C. M., et al. (2021). The Role of Activation Functions in Neural Network Performance. Machine Learning, 110(3), 619-634.

Cheng, X., et al. (2021). Efficient Neural Network Design for Edge AI: Opportunities and Challenges. IEEE Access, 9, 58219-58234.

Clevert, D.-A., Unterthiner, T., & Hochreiter, S. (2016). Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs). arXiv preprint arXiv:1511.07289. https://doi.org/10.48550/arXiv.1511.07289

Dietterich, T. G. (2021). The Challenges of Machine Learning and the Solutions it Offers. Artificial Intelligence Review, 54(1), 457-480.

Frankle, J., & Carbin, M. (2019). The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks. International Conference on Learning Representations (ICLR). https://openreview.net/forum?id=rJl-b3RcF7

Glorot, X., & Bengio, Y. (2010). Understanding the Difficulty of Training Deep Feedforward Neural Networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS), 9, 249-256. http://proceedings.mlr.press/v9/glorot10a.html

He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep Residual Learning for Image Recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770-778. https://doi.org/10.1109/CVPR.2016.90

Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., ... & Adam, H. (2017). MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv preprint arXiv:1704.04861. https://doi.org/10.48550/arXiv.1704.04861

Khan, A., et al. (2021). A Comprehensive Review on Neural Networks: Architectures, Applications, and Future Directions. Artificial Intelligence Review, 54(1), 1-36. https://link.springer.com/article/10.1007/s10462-020-09863-5

Klambauer, G., et al. (2017). Self-Normalizing Neural Networks. Advances in Neural Information Processing Systems (NeurIPS).

Krizhevsky, A. (2009). Learning Multiple Layers of Features from Tiny Images. University of Toronto Technical Report. https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf

Madry, A., Makelov, A., Schmidt, L., Tsipras, D., & Vladu, A. (2018). Towards Deep Learning Models Resistant to Adversarial Attacks. Proceedings of the International Conference on Learning Representations (ICLR). https://arxiv.org/abs/1706.06083

Molchanov, P., et al. (2019). Importance Estimation for Neural Network Pruning. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

Nair, V., & Hinton, G. E. (2010). Rectified Linear Units Improve Restricted Boltzmann Machines. Proceedings of the 27th International Conference on Machine Learning (ICML-10), 807-814. http://www.icml-2010.org/papers/432.pdf

Pearson, K. (1895). Notes on Regression and Inheritance in the Case of Two Parents. Proceedings of the Royal Society of London, 58, 240-242. https://doi.org/10.1098/rspl.1895.0041

Ramachandran, P., Zoph, B., & Le, Q. V. (2017). Searching for Activation Functions. arXiv preprint arXiv:1710.05941. https://doi.org/10.48550/arXiv.1710.05941

Smith, L. N. (2018). A Bayesian Approach to Neural Network Hyperparameter Optimization. Proceedings of the 35th International Conference on Machine Learning (ICML), 80, 1-10. http://proceedings.mlr.press/v80/smith18a.html

Smith, L. N. (2018). A Disciplined Approach to Neural Network Hyper-Parameters: Part I - Learning Rate, Batch Size, Momentum, and Weight Decay. arXiv preprint arXiv:1803.09820. https://arxiv.org/abs/1803.09820

Tan, M., & Le, Q. V. (2019). EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. International Conference on Machine Learning (ICML), 97, 6105-6114. http://proceedings.mlr.press/v97/tan19a.html

Zhang, X., Li, Y., & Wang, H. (2020). Robust Hybrid Activations for Adversarial Defense in Deep Neural Networks. Neural Networks, 132, 205-214. https://doi.org/10.1016/j.neunet.2020.08.022

Zhou, Y., et al. (2020). A Survey on Neural Architecture Search. arXiv preprint arXiv:2006.05884. https://arxiv.org/abs/2006.05884

Zhou, Y., et al. (2021). A Review on Efficient Neural Network Architectures: Design, Optimization, and Applications. ACM Computing Surveys, 54(5), 1-34.

Published

2025-10-23

How to Cite

Makieu, P., Mutwiri, J., & Jupayma Martor, J. (2025). Evaluating the Efficiency and Performance of Minimalist Neural Network Architectures with Hybrid Activation Functions. Journal of Advance Research in Computer Science & Engineering (ISSN 2456-3552), 10(2), 12-27. https://doi.org/10.61841/5zs7sn39