
Released

Poster

Texture synthesis and the controlled generation of natural stimuli using convolutional neural networks

MPS-Authors

Ecker,  AS
Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Max Planck Society;


Bethge,  M
Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Gatys, L., Ecker, A., & Bethge, M. (2015). Texture synthesis and the controlled generation of natural stimuli using convolutional neural networks. Poster presented at Bernstein Conference 2015, Heidelberg, Germany.


Cite as: https://hdl.handle.net/11858/00-001M-0000-002A-4484-4
Abstract
It is a long-standing question how biological systems transform visual inputs to robustly infer high-level visual information. Research over the last decades has established that much of the underlying computation takes place in a hierarchical fashion along the ventral visual pathway. However, the exact processing stages along this hierarchy are difficult to characterise. Here we present a method to generate stimuli that will allow a principled description of the processing stages along the ventral stream. We introduce a new parametric texture model based on the powerful feature spaces of convolutional neural networks optimised for object recognition. We show that constraining spatial summary statistics on the feature maps suffices to synthesise high-quality natural textures. Moreover, we establish that our texture representations continuously disentangle high-level visual information and demonstrate that the hierarchical parameterisation of the texture model naturally enables us to generate novel types of stimuli for systematically probing mid-level vision.
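
In the texture model the authors describe, the spatial summary statistics computed on the feature maps are Gram matrices: correlations between feature channels summed over all spatial positions, which discard spatial arrangement while preserving texture statistics. A minimal NumPy sketch of that statistic and the resulting matching loss (function names and the normalisation are illustrative assumptions, not the authors' code):

```python
import numpy as np

def gram_matrix(features):
    # features: array of shape (channels, height, width) from one CNN layer.
    # The Gram matrix G[i, j] is the inner product between channels i and j
    # over all spatial positions -- a spatial summary statistic.
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)  # normalise by the number of spatial positions

def texture_loss(synth_features, target_features):
    # Sum of mean squared Gram-matrix differences across the chosen layers.
    # Minimising this over the synthesised image's pixels (e.g. by gradient
    # descent through the network) yields a new texture sample.
    loss = 0.0
    for fs, ft in zip(synth_features, target_features):
        gs, gt = gram_matrix(fs), gram_matrix(ft)
        loss += np.mean((gs - gt) ** 2)
    return loss
```

In the full method, these statistics would be computed on feature maps of a network trained for object recognition (e.g. VGG), and constraining progressively deeper layers gives the hierarchical parameterisation the abstract refers to.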