The ever-growing reliance of individuals and society on machine learning methods has raised concerns about their trustworthiness and accountability. In response to these concerns, Socially Responsible Machine Learning (SRML) aims to develop fair, transparent, and robust machine learning algorithms. However, traditional approaches to SRML do not incorporate human perspectives and are therefore insufficient for building long-lasting trust between machines and human beings. Causality, a key ingredient of human intelligence, plays a vital role in achieving socially responsible machine learning algorithms that are compatible with human notions. Bridging the gap between traditional SRML and causality, this tutorial aims to provide a holistic overview of SRML through the lens of causality. In particular, we focus on state-of-the-art techniques for causal socially responsible ML in terms of fairness, interpretability, and robustness. The objectives of this tutorial are as follows: (1) we provide a taxonomy of the existing literature on causal socially responsible ML from the fairness, interpretability, and robustness perspectives; (2) we review the state-of-the-art techniques for each task; and (3) we elucidate open questions and future research directions. We believe this tutorial will benefit researchers and practitioners in data mining, machine learning, and the social sciences.
Lu Cheng, Ahmadreza Mosallanezhad, Paras Sheth, Huan Liu