Journal article
Contrastive fairness in machine learning
- Abstract:
- Was it fair that Harry was hired but not Barry? Was it fair that Pam was fired instead of Sam? How can one ensure fairness when an intelligent algorithm takes these decisions instead of a human? How can one ensure that the decisions were taken based on merit and not on protected attributes like race or sex? These are the questions that must be answered now that many decisions in real life can be made through machine learning. However, research in fairness of algorithms has focused on the counterfactual questions “what if?” or “why?”, whereas in real life most subjective questions of consequence are contrastive: “why this but not that?”. We introduce concepts and mathematical tools using causal inference to address contrastive fairness in algorithmic decision-making with illustrative examples.
- Publication status:
- Published
- Peer review status:
- Peer reviewed
- Files:
- Accepted manuscript (337.4KB)
- Publisher copy:
- 10.1109/LOCS.2020.3007845
- Publisher:
- Institute of Electrical and Electronics Engineers
- Journal:
- IEEE Letters of the Computer Society
- Volume:
- 3
- Issue:
- 2
- Pages:
- 38-41
- Publication date:
- 2020-07-07
- Acceptance date:
- 2020-07-01
- DOI:
- 10.1109/LOCS.2020.3007845
- EISSN:
- 2573-9689
- Language:
- English
- Keywords:
- Pubs id:
- 1117312
- Local pid:
- pubs:1117312
- Deposit date:
- 2020-07-09
Terms of use
- Copyright holder:
- Institute of Electrical and Electronics Engineers
- Copyright date:
- 2020
- Rights statement:
- © 2020 IEEE.
- Notes:
- This is the accepted manuscript version of the article. The final version is available online from Institute of Electrical and Electronics Engineers at: https://doi.org/10.1109/LOCS.2020.3007845