Released

Conference Paper

CityPersons: A Diverse Dataset for Pedestrian Detection

MPS-Authors
Zhang, Shanshan
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society

Benenson, Rodrigo
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society

Schiele, Bernt
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society

Citation

Zhang, S., Benenson, R., & Schiele, B. (2017). CityPersons: A Diverse Dataset for Pedestrian Detection. In 30th IEEE Conference on Computer Vision and Pattern Recognition (pp. 4457-4465). Piscataway, NJ: IEEE. doi:10.1109/CVPR.2017.474.


Cite as: https://hdl.handle.net/11858/00-001M-0000-002D-7CA8-E
Abstract
Convnets have enabled significant progress in pedestrian detection recently, but there are still open questions regarding suitable architectures and training data. We revisit CNN design and point out key adaptations that enable a plain FasterRCNN to obtain state-of-the-art results on the Caltech dataset. To achieve further improvement from more and better data, we introduce CityPersons, a new set of person annotations on top of the Cityscapes dataset. The diversity of CityPersons allows us, for the first time, to train a single CNN model that generalizes well across multiple benchmarks. Moreover, with additional training on CityPersons, we obtain top results with FasterRCNN on Caltech, improving especially on the more difficult cases (heavy occlusion and small scale) and providing higher localization quality.
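
For readers who want a concrete starting point, the sketch below runs an off-the-shelf Faster R-CNN from torchvision on a street-scene image and keeps only the "person" detections. This is an illustrative sketch under stated assumptions (torchvision >= 0.13, a local image file named street_scene.png, a 0.5 score threshold), not the adapted FasterRCNN described in the paper.

# Minimal sketch: plain Faster R-CNN baseline for person detection.
# Assumptions: torchvision >= 0.13; "street_scene.png" is a hypothetical input image.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Load a Faster R-CNN pre-trained on COCO and switch to inference mode.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("street_scene.png").convert("RGB")
with torch.no_grad():
    prediction = model([to_tensor(image)])[0]

# COCO label 1 is "person"; keep reasonably confident detections only.
keep = (prediction["labels"] == 1) & (prediction["scores"] > 0.5)
for box, score in zip(prediction["boxes"][keep], prediction["scores"][keep]):
    x1, y1, x2, y2 = box.tolist()
    print(f"person at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}) score={score:.2f}")

The paper's contribution is precisely that such a plain baseline, with the right design adaptations and CityPersons training data, reaches state-of-the-art results; the snippet only shows the unmodified starting point.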