Journal article

Contrastive fairness in machine learning

Abstract:
Was it fair that Harry was hired but not Barry? Was it fair that Pam was fired instead of Sam? How can one ensure fairness when an intelligent algorithm takes these decisions instead of a human? How can one ensure that the decisions were taken based on merit and not on protected attributes like race or sex? These are the questions that must be answered now that many decisions in real life can be made through machine learning. However, research in fairness of algorithms has focused on the counterfactual questions “what if?” or “why?”, whereas in real life most subjective questions of consequence are contrastive: “why this but not that?”. We introduce concepts and mathematical tools using causal inference to address contrastive fairness in algorithmic decision-making with illustrative examples.
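As a rough illustration of the contrastive question the abstract poses ("why was Harry hired but not Barry?"), and not the causal-inference formalism the paper itself develops, the sketch below contrasts a toy hiring rule's decisions for two hypothetical applicants before and after flipping a protected attribute. All names, fields, thresholds, and the decision rule are invented for this example.

```python
# Hypothetical sketch only: a crude contrastive probe, not the paper's method.
from dataclasses import dataclass


@dataclass
class Applicant:
    merit_score: float   # non-protected, decision-relevant feature
    protected: int       # protected attribute (e.g. 0/1); illustrative only


def hire(applicant: Applicant) -> bool:
    """Toy decision rule standing in for a trained classifier."""
    return applicant.merit_score >= 0.7


def contrastive_check(hired: Applicant, not_hired: Applicant) -> bool:
    """
    Would the outcomes for this pair change if each individual's protected
    attribute were flipped? If not, the contrast ("this one but not that one")
    is at least not explained by the protected attribute under this naive
    intervention.
    """
    flip = lambda a: Applicant(a.merit_score, 1 - a.protected)
    unchanged_for_hired = hire(hired) == hire(flip(hired))
    unchanged_for_not_hired = hire(not_hired) == hire(flip(not_hired))
    return unchanged_for_hired and unchanged_for_not_hired


if __name__ == "__main__":
    harry = Applicant(merit_score=0.8, protected=0)
    barry = Applicant(merit_score=0.6, protected=1)
    print("Contrast unaffected by protected attribute:",
          contrastive_check(harry, barry))
```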
Publication status:
Published
Peer review status:
Peer reviewed

Publisher copy:
10.1109/locs.2020.3007845

Authors

Institution:
University of Oxford
Division:
MPLS
Department:
Engineering Science
Role:
Author

Institution:
University of Oxford
Division:
MPLS
Department:
Engineering Science
Role:
Author
ORCID:
0000-0002-3060-3772


Publisher:
Institute of Electrical and Electronics Engineers
Journal:
IEEE Letters of the Computer Society
Volume:
3
Issue:
2
Pages:
38-41
Publication date:
2020-07-07
Acceptance date:
2020-07-01
DOI:
10.1109/locs.2020.3007845
EISSN:
2573-9689


Language:
English
Keywords:
Pubs id:
1117312
Local pid:
pubs:1117312
Deposit date:
2020-07-09
