

Released

Book Chapter

Bits from brains: Analyzing distributed computation in neural systems

MPG Authors

Priesemann, Viola
Department of Nonlinear Dynamics, Max Planck Institute for Dynamics and Self-Organization, Max Planck Society;

External Resources
No external resources have been provided
Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
No freely accessible full texts are available in PuRe
Supplementary Material (freely accessible)
No freely accessible supplementary materials are available
Citation

Wibral, M., Lizier, J., & Priesemann, V. (2017). Bits from brains: Analyzing distributed computation in neural systems. In S. I. Walker, P. C. W. Davies, & G. F. R. Ellis (Eds.), From Matter to Life: Information and Causality (pp. 429-467). Cambridge: Cambridge University Press. doi:10.1017/9781316584200.017.


Citation link: https://hdl.handle.net/11858/00-001M-0000-002D-C91E-1
Abstract
Artificial computing systems are a pervasive phenomenon in today's life. While traditionally such systems were employed to support humans in tasks that required mere number-crunching, there is an increasing demand for systems that exhibit autonomous, intelligent behavior in complex environments. These complex environments often confront artificial systems with ill-posed problems that have to be solved under constraints of incomplete knowledge and limited resources. Tasks of this kind are typically solved with ease by biological computing systems, as these cannot afford the luxury of dismissing any problem that happens to cross their path as “ill-posed.” Consequently, biological systems have evolved algorithms to approximately solve such problems – algorithms that are adapted to their limited resources and that quickly yield “good enough” solutions. Algorithms from biological systems may, therefore, serve as an inspiration for artificial information processing systems that must solve similar problems under tight constraints of computational power, data availability, and time. One naive way to use this inspiration is to copy and incorporate as much detail as possible from the biological into the artificial system, in the hope of also copying the emergent information processing. However, even small errors in copying the parameters of a system may compromise success. Therefore, it may be useful to also derive inspiration in a more abstract way, one that is directly linked to the information processing carried out by a biological system. But how can we gain insight into this information processing without caring about its biological implementation? The formal language to quantitatively describe and dissect information processing – in any system – is provided by information theory. For our particular question we can exploit the fact that information theory does not care about the nature of the variables that enter the computation or information processing. Thus, it is in principle possible to treat all relevant aspects of biological computation, and of biologically inspired computing systems, in one natural framework. In Wibral et al. (2015) we systematically presented how to analyze biological computing systems, especially neural systems, using methods from information theory, and discussed how these information-theoretic results can inspire the design of artificial computing systems. Specifically, we focused on three types of approaches to characterizing the information processing undertaken in such systems, and on what this tells us about the algorithms they implement.
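As an illustrative sketch (not code from the chapter itself), the point that information theory does not care about the nature of the variables can be made concrete with a plug-in estimate of the mutual information between two discrete observables. The variable names and toy data below (a binary "stimulus" and a noisy "response") are hypothetical and chosen only for demonstration.

# Illustrative sketch; toy data invented for demonstration, not taken from Wibral et al.
# Plug-in estimate of the Shannon mutual information I(X;Y) in bits from paired
# discrete samples. The calculation is identical whether the samples are spike
# counts, stimulus labels, or bits in a register.
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired samples xs, ys."""
    n = len(xs)
    joint = Counter(zip(xs, ys))   # empirical joint counts of (x, y) pairs
    cx = Counter(xs)               # marginal counts of X
    cy = Counter(ys)               # marginal counts of Y
    return sum((c / n) * log2(c * n / (cx[x] * cy[y]))
               for (x, y), c in joint.items())

# Toy example: a "response" that copies a binary "stimulus" with one error.
stimulus = [0, 0, 1, 1, 0, 1, 0, 1]
response = [0, 0, 1, 1, 0, 1, 1, 1]
print(f"I(stimulus; response) = {mutual_information(stimulus, response):.3f} bits")

Richer measures are needed to dissect distributed computation in the way the chapter discusses, but even this minimal estimator shows that nothing in the calculation depends on whether the variables come from a biological or an artificial system.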