Conference item
Counterfactual fairness
- Abstract:
- Machine learning can impact people with legal or ethical consequences when it is used to automate decisions in areas such as insurance, lending, hiring, and predictive policing. In many of these scenarios, previous decisions have been made that are unfairly biased against certain subpopulations, for example those of a particular race, gender, or sexual orientation. Since this past data may be biased, machine learning predictors must account for this to avoid perpetuating or creating discriminatory practices. In this paper, we develop a framework for modeling fairness using tools from causal inference. Our definition of counterfactual fairness captures the intuition that a decision is fair towards an individual if it is the same in (a) the actual world and (b) a counterfactual world where the individual belonged to a different demographic group. We demonstrate our framework on a real-world problem of fair prediction of success in law school.
- Publication status:
- Published
- Peer review status:
- Reviewed (other)
- Version:
- Publisher's Version
- Publisher:
- Massachusetts Institute of Technology Press
- Volume:
- 30
- Pages:
- 4067-4077
- Publication date:
- 2017
- Acceptance date:
- 2017-12-09
- ISSN:
- 1049-5258
- Pubs id:
- pubs:924094
- URN:
- uri:7f6b6d7f-83f4-4d38-9991-ec15ea7c3957
- UUID:
- uuid:7f6b6d7f-83f4-4d38-9991-ec15ea7c3957
- Local pid:
- pubs:924094
- Copyright holder:
- Massachusetts Institute of Technology Press
- Copyright date:
- 2017
- Notes:
- This conference paper was presented at the 31st Conference on Neural Information Processing Systems (NIPS 2017), 4-9 December 2017, Long Beach, CA, USA. This is the final published version, also available online from Massachusetts Institute of Technology Press at: https://papers.nips.cc/paper/6995-counterfactual-fairness
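- Definition (sketch):
- The fairness notion the abstract describes informally can be stated as a formal criterion; a sketch in the paper's own notation, where $A$ denotes protected attributes, $X$ the remaining observed features, $U$ latent background variables of the causal model, and $\hat{Y}$ the predictor. A predictor $\hat{Y}$ is counterfactually fair if, for every context $X = x$ and $A = a$, every outcome $y$, and every value $a'$ attainable by $A$:
- $P\big(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a\big) = P\big(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a\big)$
- That is, the distribution of the prediction is unchanged in the counterfactual world where the individual's protected attribute is intervened on to take a different value.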