
Released

Paper

Whitening Black-Box Neural Networks

MPS-Authors
/persons/resource/persons134225

Oh, Seong Joon
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society;

/persons/resource/persons214875

Augustin, Max
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society;

/persons/resource/persons45383

Schiele, Bernt
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society;

/persons/resource/persons44451

Fritz, Mario
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society;

External Resource
No external resources are shared
Fulltext (public)

arXiv:1711.01768.pdf
(Preprint), 2MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Oh, S. J., Augustin, M., Schiele, B., & Fritz, M. (2017). Whitening Black-Box Neural Networks. Retrieved from http://arxiv.org/abs/1711.01768.


Cite as: https://hdl.handle.net/21.11116/0000-0000-434A-2
Abstract
Many deployed learned models are black boxes: given an input, they return an output. Internal information about the model, such as its architecture, optimisation procedure, or training data, is not disclosed explicitly, as it might contain proprietary information or make the system more vulnerable. This work shows that such attributes of neural networks can be exposed from a sequence of queries. This has multiple implications. On the one hand, our work exposes the vulnerability of black-box neural networks to different types of attacks: we show that the revealed internal information helps generate more effective adversarial examples against the black-box model. On the other hand, this technique can be used to better protect private content from automatic recognition models using adversarial examples. Our paper suggests that it is actually hard to draw a line between white-box and black-box models.
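
To make the query-based attribute inference concrete, the sketch below illustrates the general idea under simplified assumptions. It is not the authors' exact method; the names fingerprint, train_metamodel, and query_inputs are illustrative, a scikit-learn random forest stands in for the paper's metamodel, and the pool of models with known attributes (e.g. activation function or optimiser) is assumed to be trained locally.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fingerprint(model, query_inputs):
    # Concatenate the model's output probability vectors on a fixed query set.
    return np.concatenate([model(x) for x in query_inputs])

def train_metamodel(known_models, known_attributes, query_inputs):
    # Fit a classifier mapping output fingerprints to a known internal attribute
    # (e.g. the activation function used by each locally trained model).
    X = np.stack([fingerprint(m, query_inputs) for m in known_models])
    meta = RandomForestClassifier(n_estimators=100)
    meta.fit(X, known_attributes)
    return meta

def infer_attribute(metamodel, black_box, query_inputs):
    # Query the deployed black box with the same inputs and predict its attribute.
    return metamodel.predict(fingerprint(black_box, query_inputs).reshape(1, -1))[0]

As stated in the abstract, the attributes revealed in this way can then help generate more effective adversarial examples against the black-box model.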