LEARNING TO TEACH TO NEURAL NETWORKS HOW TO LEARN WELL WITH SOH, A SELF-OBSERVING HEURISTIC

Authors

  • Jean-Jacques Mariage

DOI:

https://doi.org/10.47839/ijc.3.1.254

Keywords:

Adaptive structure, cumulative learning, emergence, evolutionary architecture, holism, neural networks growth, self-observation

Abstract

In this ongoing research we present a Self-Observing Heuristic (SOH), a hybrid computing method rooted in natural selection and optimization techniques. It provides an environmentally driven evolutionary computation scheme capable of autonomic cumulative learning. Our aim is to realize an adaptive learning system based on neo-Darwinian evolution of neural units. We proceed in two complementary directions. On the one hand, we try to automate the costly phase of tuning the configuration and learning parameters of neural networks (NNs). On the other hand, we use meiosis-like cellular growth as a natural computation technique to bypass the palimpsest effects observed when new knowledge is added to old. The main idea is to build an event-guided, growing competitive NN that develops while it learns to tune the parameters of other NNs, which may be models more or less similar, or even identical, to itself. The system adapts itself, learning to teach other models how to learn well.
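To make the division of labour described above concrete, the sketch below is a minimal, purely illustrative Python toy and not the SOH algorithm itself: a tiny growing competitive map (in the spirit of the growing-cell-structures models cited in the references) whose units store candidate learning rates for a separate target model, accumulate the loss they are credited with, and split their worst unit when that accumulated error passes a threshold. The class name GrowingTuner, the bold-driver rate adaptation, the thresholds and the 1-D regression task are our own assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(0)

class GrowingTuner:
    """Growing competitive map whose units hold candidate learning rates."""

    def __init__(self):
        self.protos = rng.normal(size=(2, 2))   # unit prototypes in task-feature space
        self.lrs = np.array([1e-2, 1e-1])       # candidate learning rate stored by each unit
        self.errors = np.zeros(2)               # loss accumulated by each unit
        self.last_loss = np.full(2, np.inf)     # previous loss seen by each unit

    def select(self, feats):
        # Winner-take-all: pick the unit closest to the current task features.
        w = int(np.argmin(np.linalg.norm(self.protos - feats, axis=1)))
        return w, self.lrs[w]

    def observe(self, w, feats, loss, eta=0.05):
        # Move the winner toward the features and credit it with the observed loss.
        self.protos[w] += eta * (feats - self.protos[w])
        self.errors[w] += loss
        # Bold-driver-style adaptation of the stored rate: raise it when the loss
        # improved, cut it sharply when it did not (capped for stability in this toy).
        self.lrs[w] = min(0.2, self.lrs[w] * (1.05 if loss < self.last_loss[w] else 0.7))
        self.last_loss[w] = loss

    def grow_if_needed(self, threshold=5.0):
        # Split the highest-error unit: a crude stand-in for meiosis-like growth,
        # giving new situations new resources instead of overwriting old ones.
        q = int(np.argmax(self.errors))
        if self.errors[q] > threshold:
            self.protos = np.vstack([self.protos, self.protos[q] + 0.1 * rng.normal(size=2)])
            self.lrs = np.append(self.lrs, self.lrs[q])
            self.errors[q] = 0.0
            self.errors = np.append(self.errors, 0.0)
            self.last_loss = np.append(self.last_loss, np.inf)

# Toy target model: 1-D linear regression (true slope 3.0) whose step size is
# chosen by the tuner, so the tuner "teaches" the target model how to learn.
tuner, wgt = GrowingTuner(), 0.0
for _ in range(500):
    x = rng.normal()
    y = 3.0 * x + 0.1 * rng.normal()
    feats = np.array([abs(x), abs(y)])          # crude "event" description of the input
    w, lr = tuner.select(feats)
    pred = wgt * x
    loss = (pred - y) ** 2
    wgt -= lr * 2.0 * (pred - y) * x            # one gradient step at the chosen rate
    tuner.observe(w, feats, loss)
    tuner.grow_if_needed()

print(f"units grown to {len(tuner.lrs)}, learned slope {wgt:.2f} (true slope 3.0)")

The only point of this design is to show one network observing the learning of another, adjusting that network's learning parameters, and growing new units rather than overwriting old ones when it meets new situations.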

References

T. Ash, and G. Cottrell (1995). “Topology-modifying neural network algorithms”. In Michael A. Arbib, ed., Handbook of Brain Theory and Neural Networks, MIT Press, 990-993.

J. Blackmore, R. Miikkulainen (1993). “Incremental Grid Growing: Encoding High-Dimensional Structure into a Two-Dimensional Feature Map”. Procs. of the IEEE ICNN, San Francisco, CA.

J. Bruske and G. Sommer (1995). “Dynamic cell structure learns perfectly topology preserving map”. Neural Computation, 7, 845-865.

B. Fritzke (1994). “Growing cell structures – a self-organizing network for unsupervised and supervised learning”. Neural Networks, 7(9), 1441-1460.

J. Kangas, T. Kohonen, J. Laaksonen, O. Simula, and O. Venta (1989). “Variants of self-organizing maps”. Procs. of the IJCNN'89, II, 517-522.

T. Kohonen (1982). “Self-Organized formation of topologically correct feature maps”. Biological Cybernetics, 43, 59-69.

T. Kohonen (1989). Self-Organization and Associative Memory. Springer Series in Information Sciences, Third Edition, Springer-Verlag, Berlin.

T. Kohonen (1993). “Things you haven't heard about the Self-Organizing Map”. Procs. of the IJCNN'93, 1147-1156.

T. Kohonen, J. Hynninen, J. Kangas, J. Laaksonen (1995). “SOM_PAK: The Self-Organizing Map program package”. Report A31, Helsinki University of Technology, Laboratory of Computer and Information Science.

G. Lendaris, and C. Paintz (1997) “Training Strategies for Critic and Action Neural Networks in Dual Heuristic Programming Method”. Procs. of the IEEE ICNN'97, 712-717.

B. F. Madore, and W. L. Freedman (1983). “Computer simulations of the Belousov-Zhabotinsky reaction”. Science, 222, 437-438.

B. F. Madore, and W. L. Freedman (1987). “Self-organizing structures”. American Scientist, 75(3), 252-259.

J-J. Mariage (2000). Architectures neuronales évolutives, un état de l'art [Evolutionary neural architectures: a state of the art]. RR CSAR 00-12-01-17, Laboratoire d'IA, Université Paris 8.

J-J. Mariage (2001). De l'Auto-Organisation à l'Auto-Observation [From Self-Organization to Self-Observation]. Ph.D. Dissertation, Department of Computing Science, AI Laboratory, Paris 8 University.

T. Martinetz (1993). “Competitive Hebbian learning rule forms perfectly topology preserving maps”. In Stan Gielen and Bert Kappen, Eds., Procs. of the ICANN'93, 427-434.

T. Martinetz, and K. Schulten (1991). “A neural gas network learns topologies”. In T. Kohonen et al. (Eds.), IEEE ICNN'91, 1, 397-407.

T. Martinetz, and K. Schulten (1994). “Topology representing networks”. Neural Networks, 7(3), 505-522.

D. Polani (1997). “Organization measures for Self-Organizing Maps”. Procs. of WSOM'97, 280-285.

I. Prigogine (1980). From being to becoming: time and complexity in the physical sciences. Freeman.

I. Prigogine, I. Stengers (1984). Order out of chaos: Man's new dialogue with nature. Bantam.

H. Ritter, and K. Schulten (1988). “Extending Kohonen's self-organizing mapping algorithm to learn ballistic movements”. In Neural Computers, R. Eckmiller and C. von der Malsburg Eds., Springer-Verlag, 393-406.

T. Trautmann, T. Deneux (1995). “Comparison of dynamic feature map models for environmental monitoring”. Procs. of the ICNN'95, I, 73-78.

F. Varela, E. Thompson, and E. Rosch (1993). The Embodied Mind: Cognitive Science and Human Experience. MIT Press, Cambridge, MA.

Th. Villmann, H.-U. Bauer (1997). “The GSOM-algorithm for growing hypercubical output spaces in self-organizing maps”. Procs. of the WSOM'97, 286-291.

T. Ziemke (1999). Rethinking Grounding. In Riegler, Peschl, von Stein (Eds.), Understanding Representation in the Cognitive Sciences, New York: Plenum Press.

Published

2014-08-01

How to Cite

Mariage, J.-J. (2014). LEARNING TO TEACH TO NEURAL NETWORKS HOW TO LEARN WELL WITH SOH, A SELF-OBSERVING HEURISTIC. International Journal of Computing, 3(1), 58-65. https://doi.org/10.47839/ijc.3.1.254

Section

Articles