EFFICIENCY ANALYSIS OF PARALLEL ROUTINE USING PROCESSOR TIME VISUALIZATION
DOI:
https://doi.org/10.47839/ijc.4.1.319
Keywords:
Efficiency analysis, processor time, dynamic mapping, coarse-grain parallelization, neural networks, MPI, MPE
Abstract
This paper describes a coarse-grain parallel algorithm for training modular neural networks with dynamic mapping onto the processors of a parallel computer. The algorithm is parallelized on the parallel computer 300 using MPI technology, and its efficiency is estimated with a modification of the MPE visualization library that measures the processor execution time of parallel routines.
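The efficiency estimate rests on measuring the processor execution time of each parallel routine. The sketch below is a minimal illustration of that idea only, not the authors' modified MPE library: it times the coarse-grain work of every MPI process with MPI_Wtime and reduces the per-process times to the master; train_module() is a hypothetical placeholder for the training of one network module.

/* Minimal sketch (assumption: one module trained per MPI process).
 * Measures each process's execution time and reports the maximum,
 * which bounds the parallel execution time of the routine. */
#include <mpi.h>
#include <stdio.h>

static void train_module(int rank)
{
    /* placeholder for the coarse-grain work assigned to this process */
    volatile double s = 0.0;
    for (long i = 0; i < 10000000L; i++)
        s += (double)rank * 1e-9;
    (void)s;
}

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double t0 = MPI_Wtime();          /* start of the measured parallel routine */
    train_module(rank);
    double local = MPI_Wtime() - t0;  /* this process's execution time */

    double longest;                   /* the slowest process bounds parallel time */
    MPI_Reduce(&local, &longest, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("processes = %d, parallel time = %.6f s\n", size, longest);

    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched with mpirun -np N, the reported maximum approximates the parallel execution time that a trace-based tool such as MPE would break down per process and per routine.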
License
International Journal of Computing is an open access journal. Authors who publish with this journal agree to the following terms:
• Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
• Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
• Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.