EFFICIENCY ESTIMATION OF PARALLEL ALGORITHM OF ENHANCED HISTORICAL DATA INTEGRATION ON COMPUTATIONAL GRID
DOI: https://doi.org/10.47839/ijc.4.3.357

Keywords: sensor drift, integration of historical data, neural networks, coarse-grain parallel algorithm, dynamic mapping, computational grids

Abstract
The main difficulty in using neural networks to improve the measurement accuracy of physical quantities (for example, temperature, humidity, pressure, etc.) in data acquisition systems is the insufficient volume of input data available for training the predicting neural network during the initial period of sensor operation. The authors have previously proposed a technique for increasing the volume of training data for the predicting neural network based on the integration of historical data method. In this paper we propose an enhanced integration of historical data method and present its simulation results on mathematical models of sensor drift using single-layer and multi-layer perceptrons. We also consider a parallelization technique for the enhanced integration of historical data method in order to decrease its running time. A modified coarse-grain parallel algorithm with dynamic mapping onto the processors of a parallel computing system, which uses neural network training time as the mapping criterion, is described. The experiments carried out show that the modified parallel algorithm is more efficient than the basic parallel algorithm with dynamic mapping, which does not use any mapping criterion.
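The dynamic mapping idea summarized in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration and not the authors' grid implementation: it greedily assigns each neural network training task to the processor expected to become free soonest, using an estimated training time as the mapping criterion, and contrasts this with a basic mapping that ignores any criterion. All names (the task list, processor count, and time estimates) are assumptions introduced for illustration only.

```python
import heapq

def map_by_training_time(task_times, num_procs):
    """Greedy dynamic mapping: assign each training task (with an
    estimated training time) to the processor that becomes free first.
    Hypothetical sketch; not the paper's actual grid implementation."""
    # Scheduling the longest tasks first usually improves the greedy result.
    loads = [(0.0, p) for p in range(num_procs)]   # (busy-until time, proc id)
    heapq.heapify(loads)
    mapping = {}
    for task_id, t in sorted(enumerate(task_times), key=lambda x: -x[1]):
        busy_until, proc = heapq.heappop(loads)     # least-loaded processor
        mapping[task_id] = proc
        heapq.heappush(loads, (busy_until + t, proc))
    makespan = max(busy for busy, _ in loads)
    return mapping, makespan

def map_without_criterion(task_times, num_procs):
    """Basic mapping with no criterion: tasks are dealt out in order."""
    loads = [0.0] * num_procs
    for task_id, t in enumerate(task_times):
        loads[task_id % num_procs] += t
    return max(loads)

if __name__ == "__main__":
    # Assumed per-network training-time estimates (seconds), e.g. derived
    # from the size of each integrated historical data set.
    times = [12.0, 3.5, 8.0, 1.2, 9.7, 4.4, 6.1, 2.8]
    _, t_modified = map_by_training_time(times, num_procs=3)
    t_basic = map_without_criterion(times, num_procs=3)
    print(f"makespan, criterion-based mapping: {t_modified:.1f} s")
    print(f"makespan, basic mapping:           {t_basic:.1f} s")
```

On a real computational grid the static estimates used here would be replaced or refined by measured training times reported by the worker processes, but the scheduling principle of the sketch stays the same.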
License
International Journal of Computing is an open access journal. Authors who publish with this journal agree to the following terms:

• Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
• Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
• Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.