Fairness in Network Representation by Latent Structural Heterogeneity in Observational Data


  • Xin Du TU Eindhoven
  • Yulong Pei TU Eindhoven
  • Wouter Duivesteijn TU Eindhoven
  • Mykola Pechenizkiy TU Eindhoven




While recent advances in machine learning have focused heavily on the fairness of algorithmic decision making, the fairness of representations, especially network representations, remains underexplored. Network representation learning learns a function mapping nodes to low-dimensional vectors. Structural properties, e.g. communities and roles, are preserved in the latent embedding space. In this paper, we argue that latent structural heterogeneity in the observational data could bias the classical network representation model. The unknown heterogeneous distribution across subgroups raises new challenges for fairness in machine learning. Pre-defined groups with sensitive attributes cannot properly tackle the potential unfairness of network representation. We propose a method that automatically discovers subgroups that are unfairly treated by the network representation model. The fairness measure we propose can evaluate complex targets with multi-degree interactions. We conduct randomized controlled experiments on synthetic datasets and verify our methods on real-world datasets. Both quantitative and qualitative results show that our method effectively recovers the fairness of network representations. Our research offers insight into how structural heterogeneity across subgroups restricted by attributes affects the fairness of network representation learning.
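To make the notion of "mapping nodes to low-dimensional vectors while preserving structural properties" concrete, here is a minimal illustrative sketch of a spectral embedding of a toy graph with two communities. This is an assumption-laden toy example for intuition only, not the method proposed in the paper: it embeds nodes via the top eigenvectors of the adjacency matrix and shows that nodes in the same community land close together in the latent space.

```python
import numpy as np

# Toy graph: two 3-node communities (triangles), chosen purely to
# illustrate that a spectral embedding preserves community structure.
A = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    A[u, v] = A[v, u] = 1.0

# Embed each node using the eigenvectors of the two largest
# eigenvalues of the adjacency matrix, scaled by those eigenvalues.
vals, vecs = np.linalg.eigh(A)      # eigenvalues in ascending order
emb = vecs[:, -2:] * vals[-2:]      # shape (6, 2): one vector per node

def dist(i, j):
    """Euclidean distance between two node embeddings."""
    return np.linalg.norm(emb[i] - emb[j])
```

In this toy setting, nodes 0 and 1 (same community) map to nearly identical points, while nodes 0 and 4 (different communities) are far apart, so community membership is recoverable from the embedding alone. The paper's concern is the converse situation: when structural patterns differ across latent subgroups, a single embedding model can represent some subgroups far less faithfully than others.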




How to Cite

Du, X., Pei, Y., Duivesteijn, W., & Pechenizkiy, M. (2020). Fairness in Network Representation by Latent Structural Heterogeneity in Observational Data. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 3809-3816. https://doi.org/10.1609/aaai.v34i04.5792



AAAI Technical Track: Machine Learning