
Released

Journal Article

Action as an innate bias for visual learning

MPS-Authors

Bülthoff, HH
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Yuille, A., & Bülthoff, H. (2012). Action as an innate bias for visual learning. Proceedings of the National Academy of Sciences of the United States of America, 109(44), 17736-17737. doi:10.1073/pnas.1215851109.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-B5A2-1
Abstract
Babies are faced at birth with a blooming, buzzing confusion of visual stimuli (1). The set of all possible images is truly enormous (2), and simple calculations suggest that only a small fraction of all possible images has ever been seen over the entire history and prehistory of mankind. Moreover, the world contains an estimated 30,000 objects (3), which occur in more than 1,000 different types of scenes. How can an infant start making sense of the visual world? Detailed models of how infants learn to understand images, and of the balance between nature and nurture, are currently lacking. Studies suggest that visual abilities develop in a stereotyped order (4). In particular, infants appear able to perceive motion and detect faces at an early stage of development. They can probably exploit the regularity that motion tends to be smooth in space and time, which also enables them to track image patches. Vision researchers have also demonstrated that many vertebrates and insects rely heavily on motion perception to survive in this complex visual world, e.g., for camouflage breaking or figure-ground separation (5, 6), and there are computational models that relate this processing to neural circuitry (7).
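
To make the scale of the abstract's "simple calculations" concrete, here is a minimal back-of-envelope sketch in Python. The pixel grid size, population count, lifespan, and fixation rate are illustrative assumptions, not figures taken from the article:

    # Back-of-envelope version of the "simple calculations" the abstract
    # alludes to. All figures below are illustrative assumptions, not
    # values from the article.

    # Even a tiny 100 x 100 binary image admits 2**(100*100) distinct patterns.
    possible_images = 2 ** (100 * 100)

    # A deliberately generous bound on the images humanity has ever viewed:
    # ~10**11 people who ever lived, ~70 years of waking life each
    # (16 h/day), and ~10 eye fixations per second.
    people = 10 ** 11
    waking_seconds = 70 * 365 * 16 * 3600        # about 1.5e9 per person
    images_seen = people * waking_seconds * 10   # about 1.5e21 in total

    # Compare orders of magnitude via digit counts (safe for huge ints).
    print(f"possible images:  ~10^{len(str(possible_images)) - 1}")  # ~10^3010
    print(f"images ever seen: ~10^{len(str(images_seen)) - 1}")      # ~10^21

Even under these generous assumptions, the fraction of the image space ever sampled is on the order of 10^-2989, which is the sense in which visual experience alone cannot come close to covering the space of possible images.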