
Released

Conference Paper

Developing neural networks with neurons competing for survival

MPS-Authors
/persons/resource/persons192603

Peng,  Z
Research Group Sensorimotor Learning and Decision-Making, Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83827

Braun,  DA
Research Group Sensorimotor Learning and Decision-Making, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Research Group Sensorimotor Learning and Decision-making, Max Planck Institute for Intelligent Systems, Max Planck Society;

Citation

Peng, Z., & Braun, D. (2015). Developing neural networks with neurons competing for survival. In 5th Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics (IEEE ICDL-EPIROB 2015) (pp. 152-153). Piscataway, NJ, USA: IEEE.


Cite as: https://hdl.handle.net/11858/00-001M-0000-002A-44FE-1
Abstract
We study developmental growth in a feedforward neural network model inspired by the survival principle in nature. Each neuron has to select its incoming connections in a way that allows it to fire, as neurons that are unable to fire over a period of time degenerate and die. In order to survive, neurons have to find recurring patterns in the activity of the neurons in the preceding layer, because each neuron requires more than one active input at any one time to have enough activation for firing. The sensory input at the lowest layer therefore provides the maximum amount of activation that all neurons compete for. The whole network grows dynamically over time depending on how many patterns can be found and how many neurons can maintain themselves accordingly. We show in simulations that this naturally leads to abstractions in higher layers that emerge in an unsupervised fashion. When evaluating the network in a supervised learning paradigm, it is clear that our network is not competitive. What is interesting, though, is that this performance was achieved by neurons that simply struggle for survival and know nothing about performance error. In contrast to most studies on neural evolution that rely on a network-wide fitness function, our goal was to show that learning behaviour can appear in a system without being driven by any specific utility function or reward signal.
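
To make the survival principle described above concrete, the following is a minimal illustrative sketch, not the authors' implementation: each neuron selects a fixed number of incoming connections, fires only when at least two of those inputs are active at the same time, dies after a prolonged period of silence, and new neurons are added so the layer grows over time. All specifics (fan-in, survival window, growth interval, the two recurring input patterns) are assumptions made for the example.

import random

INPUT_SIZE = 8
FAN_IN = 3              # incoming connections each neuron selects (assumed)
SURVIVAL_WINDOW = 20    # steps a neuron may stay silent before it dies (assumed)
GROWTH_INTERVAL = 10    # a new neuron is added every this many steps (assumed)

# Two recurring input patterns plus occasional noise: inputs 0,1,2 or 5,6,7 co-activate.
PATTERNS = [{0, 1, 2}, {5, 6, 7}]

def sample_input():
    active = set(random.choice(PATTERNS))
    if random.random() < 0.3:                 # occasional noisy extra input
        active.add(random.randrange(INPUT_SIZE))
    return active

class Neuron:
    def __init__(self):
        # Each neuron selects a random subset of incoming connections.
        self.inputs = set(random.sample(range(INPUT_SIZE), FAN_IN))
        self.silent_steps = 0

    def step(self, active_inputs):
        # Firing requires more than one active input at the same time.
        fired = len(self.inputs & active_inputs) >= 2
        self.silent_steps = 0 if fired else self.silent_steps + 1
        return fired

layer = [Neuron() for _ in range(5)]
for t in range(200):
    active = sample_input()
    for n in layer:
        n.step(active)
    # Neurons that fail to fire for too long degenerate and die.
    layer = [n for n in layer if n.silent_steps < SURVIVAL_WINDOW]
    # The layer grows dynamically: a fresh neuron is added now and then.
    if t % GROWTH_INTERVAL == 0:
        layer.append(Neuron())

# Surviving neurons tend to have selected connections that match one of the
# recurring input patterns, without any performance error or reward signal.
for n in layer:
    print(sorted(n.inputs), "silent for", n.silent_steps, "steps")

Running this sketch shows the intended effect: neurons whose chosen connections overlap a recurring pattern keep firing and survive, while neurons with arbitrary connections fall silent and are removed, so the surviving population reflects the structure of the input without any supervised objective.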