
Released

Paper

Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment

MPS-Authors
Zafar, Muhammad Bilal
Group K. Gummadi, Max Planck Institute for Software Systems, Max Planck Society

Valera, Isabel
Group M. Gomez Rodriguez, Max Planck Institute for Software Systems, Max Planck Society

Gomez Rodriguez, Manuel
Group M. Gomez Rodriguez, Max Planck Institute for Software Systems, Max Planck Society

Gummadi, Krishna P.
Group K. Gummadi, Max Planck Institute for Software Systems, Max Planck Society

Fulltext (public)

arXiv:1610.08452.pdf (Preprint), 644 KB

Citation

Zafar, M. B., Valera, I., Gomez Rodriguez, M., & Gummadi, K. P. (2016). Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment. Fairness, Accountability, and Transparency in Machine Learning. doi:10.1145/3038912.3052660.


Cite as: https://hdl.handle.net/11858/00-001M-0000-002C-F674-8
Abstract
Automated data-driven decision-making systems are increasingly being used to assist, or even replace, humans in many settings. These systems function by learning from historical decisions, often made by humans. To maximize the utility of these systems (or classifiers), their training involves minimizing the errors (or misclassifications) over the given historical data. However, it is quite possible that the optimally trained classifier makes decisions for people belonging to different social groups with different misclassification rates (e.g., the misclassification rate for females is higher than for males), thereby placing these groups at an unfair disadvantage. To account for and avoid such unfairness, in this paper we introduce a new notion of unfairness, disparate mistreatment, which is defined in terms of misclassification rates. We then propose intuitive measures of disparate mistreatment for decision boundary-based classifiers, which can be easily incorporated into their formulation as convex-concave constraints. Experiments on synthetic as well as real-world datasets show that our methodology is effective at avoiding disparate mistreatment, often at a small cost in terms of accuracy.
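The abstract defines disparate mistreatment in terms of group-wise misclassification rates. As a rough illustration only (not the authors' implementation), the sketch below shows how one might measure it for a binary classifier and a binary sensitive attribute by comparing false positive and false negative rates across the two groups; all function and variable names are illustrative, and it assumes 0/1 labels and group codes.

```python
import numpy as np

def group_error_rates(y_true, y_pred, group_mask):
    """False positive and false negative rates within one group."""
    yt, yp = y_true[group_mask], y_pred[group_mask]
    fpr = np.mean(yp[yt == 0] == 1)  # P(yhat = 1 | y = 0)
    fnr = np.mean(yp[yt == 1] == 0)  # P(yhat = 0 | y = 1)
    return fpr, fnr

def disparate_mistreatment(y_true, y_pred, sensitive):
    """Absolute gaps in FPR and FNR between the two sensitive groups."""
    fpr0, fnr0 = group_error_rates(y_true, y_pred, sensitive == 0)
    fpr1, fnr1 = group_error_rates(y_true, y_pred, sensitive == 1)
    return abs(fpr0 - fpr1), abs(fnr0 - fnr1)

# Toy example: errors concentrate in group 1, so both gaps are nonzero.
y_true    = np.array([0, 0, 1, 1, 0, 0, 1, 1])
y_pred    = np.array([0, 0, 1, 1, 1, 0, 0, 1])
sensitive = np.array([0, 0, 0, 0, 1, 1, 1, 1])
fpr_gap, fnr_gap = disparate_mistreatment(y_true, y_pred, sensitive)
print(f"FPR gap: {fpr_gap:.2f}, FNR gap: {fnr_gap:.2f}")  # -> 0.50 and 0.50
```

The paper itself goes further: it approximates these rate differences and adds them as convex-concave constraints when training decision boundary-based classifiers, whereas the sketch above only measures the quantity that those constraints are meant to control.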