APPROACH TO THE SYNTHESIS OF NEURAL NETWORK STRUCTURE DURING CLASSIFICATION

Authors

  • Avazjon R. Marakhimov
  • Kabul K. Khudaybergenov

DOI:

https://doi.org/10.47839/ijc.19.1.1689

Keywords:

Neural network, hidden layer, MLP model, sigmoidal function, classification, pattern recognition.

Abstract

Evaluating the number of hidden neurons necessary for solving pattern recognition and classification tasks is one of the key problems in artificial neural networks. The multilayer perceptron is the most widely used artificial neural network for estimating functional structure in classification. In this paper, we show that a two-hidden-layer feedforward neural network with d inputs, d neurons in the first hidden layer, 2d+2 neurons in the second hidden layer, k outputs, and an infinitely differentiable sigmoidal activation function can solve classification and pattern recognition problems with arbitrary accuracy. This result can be applied to design pattern recognition and classification models whose structure is optimal in the number of hidden neurons and hidden layers. Experimental results on well-known benchmark datasets show that the convergence and accuracy of the proposed artificial neural network model are acceptable. The findings are experimentally analyzed on four different datasets from a machine learning repository.
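For illustration, the architecture described in the abstract can be sketched in a few lines of Python with PyTorch. This is a minimal sketch of the stated neuron counts only, not the authors' implementation; the function name is ours, and training details (loss, optimizer, data preprocessing) are omitted:

```python
import torch
import torch.nn as nn

def build_classifier(d: int, k: int) -> nn.Sequential:
    """Two-hidden-layer feedforward network with the neuron counts from
    the abstract: d inputs -> d units -> 2d+2 units -> k outputs."""
    return nn.Sequential(
        nn.Linear(d, d),          # first hidden layer: d neurons
        nn.Sigmoid(),             # infinitely differentiable sigmoidal activation
        nn.Linear(d, 2 * d + 2),  # second hidden layer: 2d+2 neurons
        nn.Sigmoid(),
        nn.Linear(2 * d + 2, k),  # k outputs, one score per class
    )

# Hypothetical usage: a 4-feature, 3-class problem (e.g., Iris)
model = build_classifier(d=4, k=3)
x = torch.randn(8, 4)   # batch of 8 samples with d = 4 features
scores = model(x)       # shape: (8, 3)
```

Note that the hidden-layer widths are fixed by the input dimension d alone, which is the point of the result: the network structure need not be tuned per dataset beyond choosing d and k.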

References

P. Baldi, R. Vershynin, “The capacity of feedforward neural networks,” Neural Networks, vol. 116, pp. 288-311, 2019.

V. E. Ismailov, “Approximation by neural networks with weights varying on a finite set of directions,” Journal of Mathematical Analysis and Applications, vol. 389, issue 1, pp. 72-83, 2012.

Z. Chen, F. Cao, “The approximation operators with sigmoidal functions,” Computers & Mathematics with Applications, vol. 58, issue 4, pp. 758-765, 2009.

A. Pinkus, “Approximation theory of the MLP model in neural networks,” Acta Numerica, vol. 8, pp. 143-195, 1999.

R. Yamashita, M. Nishio, R.K. Do, K. Togashi, “Convolutional neural networks: an overview and application in radiology,” Insights Imaging, vol. 9, issue 4, pp. 611-629, 2018.

M.J. Diamantopoulou, V.Z. Antonopoulos, D.M. Papamichail, “Cascade correlation artificial neural networks for estimating missing monthly values of water quality parameters in rivers,” Water Resources Management, vol. 21, issue 3, pp. 649-662, 2007.

Q. Zhang, L.T. Yang, Z. Chen, P. Li, “A survey on deep learning for big data,” Information Fusion, vol. 42, pp. 146-157, 2018.

W. Liu, Z. Wang, X. Liu, N. Zeng, Y. Liu, F.E. Alsaadi, “A survey of deep neural network architectures and their applications,” Neurocomputing, vol. 234, pp. 11-26, 2017.

Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, P.S. Yu, “A comprehensive survey on graph neural networks,” arXiv preprint arXiv:1901.00596, 2019.

B. Chandra, R.K. Sharma, “On improving recurrent neural network for image classification,” Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, 2017, pp. 1904-1907.

P. Doupe, J.H. Faghmous, S. Basu, “Machine learning for health services researchers,” Value in Health, vol. 22, issue 7, pp. 808-815, 2019.

M.D. Becker, S. Kasper, B. Böckmann, K. Jöckel, I. Virchow, “Natural language processing of German clinical colorectal cancer notes for guideline-based treatment evaluation,” International Journal of Medical Informatics, vol. 127, pp. 141-146, 2019.

W. Tam, Z. Yang, “Neural parallel engine: A toolbox for massively parallel neural signal processing,” Journal of Neuroscience Methods, vol. 301, pp. 18-33, 2018.

P.R. Kurka, A.A. Salazar, “Applications of image processing in robotics and instrumentation,” Mechanical Systems and Signal Processing, vol. 124, pp. 142-169, 2019.

A.K. Maier, C. Syben, T. Lasser, C. Riess, “A gentle introduction to deep learning in medical image processing,” Zeitschrift für Medizinische Physik, vol. 29, issue 2, pp. 86-101, 2019.

J. Zhang, S.O. Williams, H. Wang, “Intelligent computing system based on pattern recognition and data mining algorithms,” Sustainable Computing: Informatics and Systems, vol. 20, pp. 192-202, 2018.

J. Gola, D. Britz, T.M. Staudt, M. Winter, A.S. Schneider, M. Ludovici, F. Mücklich, “Advanced microstructure classification by data mining methods,” Computational Materials Science, vol. 148, pp. 324-335, 2018.

Y. Lu, J. Yang, Q. Wang, Z. Huang, “The upper bound of the minimal number of hidden neurons for the parity problem in binary neural networks,” Science China Information Sciences, vol. 55, issue 7, pp. 1579-1587, 2012.

S. Huang, Y. Huang, “Bounds on number of hidden neurons of multilayer perceptrons in classification and recognition,” Proceedings of the IEEE International Symposium on Circuits and Systems, New Orleans, LA, USA, May 1-3, 1990, pp. 2500-2503.

S. Tamura, “Capabilities of a three-layer feedforward neural network,” Proceedings of the IEEE International Joint Conference on Neural Networks, Singapore, November 18-21, 1991, pp. 2757-2762.

S. Tamura, M. Tateishi, “Capabilities of a four-layered feedforward neural network: Four layers versus three,” IEEE Transactions on Neural Networks, vol. 8, issue 2, pp. 251-255, 1997.

G. Huang, H.A. Babri, “Upper bounds on the number of hidden neurons in feedforward networks with arbitrary bounded nonlinear activation functions,” IEEE Transactions on Neural Networks, vol. 9, issue 1, pp. 224-229, 1998.

G. Huang, “Learning capability and storage capacity of two-hidden-layer feedforward networks,” IEEE Transactions on Neural Networks, vol. 14, issue 2, pp. 274-281, 2003.

E.J. Teoh, C. Xiang, K.C. Tan, “Estimating the number of hidden neurons in a feedforward network using the singular value decomposition,” IEEE Transactions on Neural Networks, vol. 17, issue 6, pp. 1623-1629, 2006.

S. Omatu, “Classification of mixed odors using a layered neural network,” International Journal of Computing, vol. 16, issue 1, pp. 41-48, 2017.

D.C. Psichogios, L.H. Ungar, “SVD-NET: An algorithm that automatically selects network structure,” IEEE Transactions on Neural Networks, vol. 5, issue 3, pp. 513-515, 1994.

J.D. Santos, G.D. Barreto, C.M. Medeiros, “Estimating the number of hidden neurons of the MLP using singular value decomposition and principal components analysis: a novel approach,” Proceedings of the 2010 Eleventh Brazilian Symposium on Neural Networks, Sao Paulo, Brazil, October 23-28, 2010, pp. 19-24.

N. Jiang, Z. Zhang, X. Ma, J. Wang, “The lower bound on the number of hidden neurons in multi-valued multi-threshold neural networks,” Proceedings of the 2008 Second International Symposium on Intelligent Information Technology Application, Shanghai, China, 2008, pp. 103-107.

V. Kurkova, “Kolmogorov’s theorem is relevant,” Neural Computation, vol. 3, pp. 617-622, 1991.

V. Kurkova, “Kolmogorov’s theorem and multilayer neural networks,” Neural Networks, vol. 5, issue 3, pp. 501-506, 1992.

V.E. Ismailov, “On the approximation by neural networks with bounded number of neurons in hidden layers,” Journal of Mathematical Analysis and Applications, vol. 417, issue 2, pp. 963-969, 2014.

V. Maiorov, A. Pinkus, “Lower bounds for approximation by MLP neural networks,” Neurocomputing, vol. 25, pp. 81-91, 1999.

Published

2020-03-31

How to Cite

Marakhimov, A. R., & Khudaybergenov, K. K. (2020). APPROACH TO THE SYNTHESIS OF NEURAL NETWORK STRUCTURE DURING CLASSIFICATION. International Journal of Computing, 19(1), 20-26. https://doi.org/10.47839/ijc.19.1.1689

Issue

Vol. 19 No. 1 (2020)

Section

Articles