Kernel Online System for Fast Principal Component Analysis and its Adaptive Learning
DOI: https://doi.org/10.47839/ijc.20.2.2164

Keywords: Kernel function, Data compression, Neural system, Hebb-Sanger neural network, Oja neuron

Abstract
An artificial neural system for data compression that sequentially processes linearly nonseparable classes is proposed. Its main elements are adjustable radial-basis functions (Epanechnikov kernels), an adaptive linear associator trained by a multistep optimal algorithm, and a Hebb-Sanger neural network whose nodes are formed by Oja neurons. The modified Oja learning algorithm is given additional filtering properties (for noisy data) and tracking properties (for nonstationary data). The main feature of the proposed system is its ability to work under significant nonlinearity of the input data, which arrive sequentially and are nonstationary in nature. The effectiveness of the developed approach is confirmed by experimental results. The proposed kernel online neural system is designed to solve compression and visualization tasks in which the input data form linearly nonseparable classes, within the general problems of Data Stream Mining and Dynamic Data Mining. Its main benefits are high processing speed and the ability to handle data whose characteristics change over time.
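For readers who want a concrete picture, the following Python sketch illustrates the kind of pipeline the abstract outlines: inputs pass through an Epanechnikov radial-basis layer, and the resulting features feed an online Hebb-Sanger (generalized Hebbian) network whose rows act as Oja neurons. The kernel centers, widths, layer sizes, and learning-rate schedule are illustrative assumptions; the sketch uses the classical Sanger rule, not the authors' modified Oja algorithm with its additional filtering and tracking properties.

import numpy as np

# Hypothetical sketch: an Epanechnikov radial-basis layer feeding an online
# Hebb-Sanger (generalized Hebbian) network; each row of W is an Oja neuron.

def epanechnikov(x, c, h):
    # Epanechnikov kernel (up to its 3/4 normalizing constant):
    # K(u) = max(0, 1 - u^2), with u = ||x - c|| / h.
    u = np.linalg.norm(x - c) / h
    return max(0.0, 1.0 - u * u)

def kernel_layer(x, centers, h):
    # Map input x into the kernel feature space, one output per center.
    return np.array([epanechnikov(x, c, h) for c in centers])

def sanger_step(W, phi, eta):
    # Sanger's rule: W += eta * (y phi^T - LT[y y^T] W), where LT keeps the
    # lower triangle so neuron j deflates the components already extracted
    # by neurons 1..j; with a single row it reduces to Oja's rule.
    y = W @ phi
    W += eta * (np.outer(y, phi) - np.tril(np.outer(y, y)) @ W)
    return W, y

rng = np.random.default_rng(0)
centers = rng.standard_normal((10, 3))   # assumed kernel centers
W = 0.01 * rng.standard_normal((2, 10))  # extract two principal components
for t in range(1, 5001):                 # data arrive one sample at a time
    x = rng.standard_normal(3)           # stand-in for the data stream
    phi = kernel_layer(x, centers, h=2.0)
    W, y = sanger_step(W, phi, eta=1.0 / t)  # decaying rate; a constant rate
                                             # would instead track drift

After training, the rows of W approximate the leading principal directions of the kernel features, so the output y serves as the compressed representation used for visualization.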
References
D. Bau III, L. N. Trefethen, Numerical Linear Algebra, Philadelphia: Society for Industrial and Applied Mathematics, 1997, https://doi.org/10.1137/1.9780898719574.
M. Scholz, M. Fraunholz, J. Selbig, “Nonlinear principal component analysis: Neural network models and applications,” in: Principal Manifolds for Data Visualization and Dimension Reduction, Lecture Notes in Computational Science and Engineering, vol. 58, Springer, Berlin, Heidelberg, 2007, https://doi.org/10.1007/978-3-540-73750-6_2.
K. Karhunen, “Über lineare Methoden in der Wahrscheinlichkeitsrechnung,” Ann. Acad. Sci. Fennicae, Ser. A. I. Math.-Phys., no. 37, pp. 1-79, 1947. (in German)
B. Simon, Functional Integration and Quantum Physics, Academic Press, 1979.
E. Oja, “A simplified neuron model as a principal component analyzer,” Journal of Math. Biology, vol. 15, pp. 267-273, 1982, https://doi.org/10.1007/BF00275687.
E. Oja, “Neural networks, principal components, and subspaces,” International Journal of Neural Systems, vol. 1, pp. 61-68, 1989, https://doi.org/10.1142/S0129065789000475.
R. Xu, D. C. Wunsch, Clustering, IEEE Press Series on Computational Intelligence, Hoboken, NJ: John Wiley & Sons, Inc., 2009, 370 p.
T. M. Cover, “Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition,” IEEE Trans. on Electronic Computers, vol. 14, pp. 326-334, 1965, https://doi.org/10.1109/PGEC.1965.264137.
Ye. V. Bodyanskiy, A. O. Deineko, F. M. Eze, “Kernel fuzzy Kohonen’s clustering neural network and its recursive learning,” Automatic Control and Computer Sciences, vol. 52, pp. 166-174, 2018, https://doi.org/10.3103/S0146411618030045.
T. Sanger, “Optimal unsupervised learning in a single layer feedforward neural network,” Neural Networks, vol. 2, issue 6, pp. 459-473, 1989, https://doi.org/10.1016/0893-6080(89)90044-0.
Ye. Bodyanskiy, V. Kolodyazhniy, A. Stephan, “An adaptive learning algorithm for a neuro-fuzzy network,” in: B. Reusch (Ed.), Computational Intelligence. Theory and Applications, Berlin, Heidelberg: Springer-Verlag, 2001, pp. 68-75, https://doi.org/10.1007/3-540-45493-4_11.
V. A. Epanechnikov, “Non-parametric estimation of a multivariate probability density,” Theory of Probability and its Applications, vol. 14, issue 1, pp. 153-158, 1969, https://doi.org/10.1137/1114019.
A. Cichocki, R. Unbehauen, Neural Networks for Optimization and Signal Processing, Stuttgart, Teubner, 1993.
H. Yin, “Learning nonlinear principal manifolds by self-organising maps,” in: A. N. Gorban, B. Kégl, D. C. Wunsch, A. Y. Zinovyev (Eds.), Principal Manifolds for Data Visualization and Dimension Reduction, Lecture Notes in Computational Science and Engineering, vol. 58, Springer, Berlin, Heidelberg, 2008, pp. 68-95, https://doi.org/10.1007/978-3-540-73750-6_3.
S. Haykin, Neural Networks: A Comprehensive Foundation, New Jersey: Prentice-Hall, 1999.
K.-L. Du, M. N. S. Swamy, Neural Networks and Statistical Learning, London: Springer-Verlag, 2014, 824 p.
R. Kruse, C. Borgelt, F. Klawonn, C. Moewes, M. Steinbrecher, P. Held, Computational Intelligence, Berlin: Springer, 2013, 488 p., https://doi.org/10.1007/978-1-4471-5013-8.
F. M. Ham, I. Kostanic, Principles of Neurocomputing for Science and Engineering, N.Y.: McGraw-Hill, Inc., 2001, 642 p.
L. Rutkowski, Computational Intelligence. Methods and Techniques, Berlin, Heidelberg: Springer-Verlag, 2008, 514 p.
L. Ljung, System Identification: Theory for the User, Upper Saddle River, NJ: Prentice Hall, 1999, 519 p.
B. Schölkopf, A. J. Smola, Learning with Kernels, MIT Press, 2002.