
Released

Conference Paper

Developing neural networks with neurons competing for survival

MPG Authors

Peng,  Z
Research Group Sensorimotor Learning and Decision-Making, Max Planck Institute for Biological Cybernetics, Max Planck Society;


Braun,  DA
Research Group Sensorimotor Learning and Decision-Making, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Research Group Sensorimotor Learning and Decision-making, Max Planck Institute for Intelligent Systems, Max Planck Society;

External Resources
No external resources are shared
Full texts (restricted access)
There are currently no full texts shared for your IP range.
Full texts (freely accessible)
No public full texts are available in PuRe
Supplementary Material (freely accessible)
No public supplementary materials are available
Citation

Peng, Z., & Braun, D. (2015). Developing neural networks with neurons competing for survival. In 5th Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics (IEEE ICDL-EPIROB 2015) (pp. 152-153). Piscataway, NJ, USA: IEEE.


Cite as: https://hdl.handle.net/11858/00-001M-0000-002A-44FE-1
Abstract
We study developmental growth in a feedforward neural network model inspired by the survival principle in nature. Each neuron has to select its incoming connections in a way that allows it to fire, as neurons that are not able to fire over a period of time degenerate and die. In order to survive, neurons have to find recurring patterns in the activity of the neurons in the preceding layer, because each neuron requires more than one active input at any one time to have enough activation for firing. The sensory input at the lowest layer therefore provides the maximum amount of activation that all neurons compete for. The whole network grows dynamically over time depending on how many patterns can be found and how many neurons can maintain themselves accordingly. We show in simulations that this naturally leads to abstractions in higher layers that emerge in an unsupervised fashion. When evaluated in a supervised learning paradigm, our network is clearly not competitive. What is interesting, though, is that this performance was achieved by neurons that simply struggle for survival and know nothing about performance error. In contrast to most studies on neural evolution that rely on a network-wide fitness function, our goal was to show that learning behaviour can appear in a system without being driven by any specific utility function or reward signal.
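The survival mechanism described in the abstract can be illustrated with a minimal sketch. The code below is not the authors' implementation; the firing threshold, fan-in, survival horizon, and input pattern are all illustrative assumptions. It shows one layer in which each neuron wires itself to a random subset of inputs, fires only when more than one of its selected inputs is active (i.e. when it has found a co-active pattern), and degenerates if it stays silent too long:

```python
import random

FIRE_THRESHOLD = 2     # "more than one active input" needed to fire (assumed: exactly 2)
MAX_SILENT_STEPS = 5   # steps without firing before a neuron dies (illustrative value)

class Neuron:
    def __init__(self, candidate_inputs, fan_in=3):
        # each neuron selects its own incoming connections (here: a random subset)
        self.inputs = random.sample(list(candidate_inputs),
                                    min(fan_in, len(candidate_inputs)))
        self.silent_steps = 0
        self.active = False

    def step(self, prev_activity):
        # fire only if enough selected inputs were simultaneously active
        n_active = sum(prev_activity[i] for i in self.inputs)
        self.active = n_active >= FIRE_THRESHOLD
        self.silent_steps = 0 if self.active else self.silent_steps + 1
        # return False once the neuron has degenerated
        return self.silent_steps <= MAX_SILENT_STEPS

def simulate(n_inputs=8, n_neurons=6, steps=20, seed=0):
    random.seed(seed)
    layer = [Neuron(range(n_inputs)) for _ in range(n_neurons)]
    for _ in range(steps):
        # sensory input: a recurring co-active pattern plus random noise
        pattern = [1, 1, 0, 1] + [random.randint(0, 1) for _ in range(n_inputs - 4)]
        layer = [n for n in layer if n.step(pattern)]  # dead neurons are removed
    return len(layer)  # neurons that found recurring patterns survive
```

In the full model the network is multi-layered and grows dynamically, so surviving higher-layer neurons come to represent recurring combinations of lower-layer activity; this sketch only captures the per-neuron survival rule.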