Analyzing the Evolution of Fairness Properties in ML-based Code Repositories (available)

Starting Date: Summer 2023
Prerequisites: Good programming skills in a high-level programming language (e.g., Python), experience in Machine Learning (e.g., model training with Python and Sklearn), and experience with version-control tooling (Git and GitHub APIs)
Will results be assigned to University: No

Project Description:

Machine Learning (ML) systems are vital components of everyday products and services in our society. ML models are widely employed in law enforcement (e.g., COMPAS), legal use cases (e.g., LegalBERT), and programming-language tasks (e.g., CodeBERT, GitHub Copilot). Despite the criticality of these use cases, ML systems are often biased against certain groups and individuals [5]. Several well-known ML systems deployed by popular software companies (including Google, Amazon and Twitter) and government institutions have exhibited biases against certain individuals or groups [1][2], thereby violating fairness principles and discriminating against them. For instance, Amazon’s same-day delivery service was biased against certain groups because its development process did not account for demographic information [3]. Such biases may be introduced during the ML development process. More importantly, they may evolve as the ML software changes. We posit that as software artefacts (e.g., training data or the training configuration/process) evolve, the fairness properties of the ML model may also evolve.

The goal of this project is to study the evolution of fairness properties in ML-based code repositories. The aim is to examine the level of bias in ML-based software artefacts as they change over time. We plan to achieve this by developing a tool that automatically collects and analyzes over one hundred (100+) ML projects from GitHub. We aim to study the correlations, relationships and dependencies between evolving ML development artefacts and software fairness properties (e.g., individual and group fairness); a minimal sketch of the building blocks involved follows.
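To make the intended analysis concrete, below is a minimal sketch (in Python, using requests and Sklearn) of the two building blocks such a tool might combine: mining a repository's commit history for changes to an ML artefact via the GitHub REST API, and computing one example group-fairness metric (statistical parity difference) for a model snapshot. The repository name, artefact path, and synthetic dataset are hypothetical placeholders, not part of the project plan.

```python
# Illustrative sketch only (not the project's actual tooling). It combines:
# (1) mining a GitHub repository for commits that change an ML artefact, and
# (2) measuring a group-fairness property (statistical parity difference).
# The repository name, artefact path, and dataset are hypothetical placeholders.

import numpy as np
import requests
from sklearn.linear_model import LogisticRegression


def commits_touching(owner, repo, path, token=None):
    """Return commits that modified `path`, oldest first (GitHub REST API v3)."""
    headers = {"Authorization": f"token {token}"} if token else {}
    url = f"https://api.github.com/repos/{owner}/{repo}/commits"
    resp = requests.get(url, params={"path": path, "per_page": 100}, headers=headers)
    resp.raise_for_status()
    return list(reversed(resp.json()))  # chronological order


def statistical_parity_difference(y_pred, protected):
    """Group fairness: P(y_hat = 1 | protected = 1) - P(y_hat = 1 | protected = 0)."""
    y_pred, protected = np.asarray(y_pred), np.asarray(protected)
    return y_pred[protected == 1].mean() - y_pred[protected == 0].mean()


if __name__ == "__main__":
    # (1) Hypothetical repository and artefact path: list commits touching the training script.
    for commit in commits_touching("some-org", "some-ml-project", "train.py"):
        print(commit["sha"][:7], commit["commit"]["message"].splitlines()[0])

    # (2) Toy snapshot: train a model on synthetic data with a binary protected attribute
    # and report its statistical parity difference.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    protected = rng.integers(0, 2, size=1000)
    y = (X[:, 0] + 0.5 * protected + rng.normal(size=1000) > 0).astype(int)
    model = LogisticRegression().fit(np.c_[X, protected], y)
    y_hat = model.predict(np.c_[X, protected])
    print("Statistical parity difference:", statistical_parity_difference(y_hat, protected))
```

In the actual study, the fairness measurement would be repeated at each relevant commit so that the metric's trajectory can be correlated with changes to artefacts such as the training data and configuration.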

Required Skills:

Knowledge of the following:
* Good programming skills in a high-level programming language, e.g., Python
* Experience in Machine Learning, e.g., model training with Python and Sklearn
* Experience using Git and GitHub APIs

Deliverables:

(a) A software tool to analyze the evolution of fairness properties in code repositories
(b) Experiments involving 100+ ML-based code repositories
(c) A report documenting the implementation and experimental findings from above

Why Should I Apply?:

This project provides opportunities to
* Develop skills in SE/ML research, including SE for ML, ML for SE, and software fairness
* Build a new software tool to automatically analyze fairness properties in ML code repositories
* Contribute to cutting-edge SE research, with potential for publication at top-tier SE venues

Previous Related Works: In previous works, we have proposed a grammar-based approach for fairness testing [4] and conducted a survey of the literature on software fairness [5].

References:

[1] https://www.theguardian.com/technology/2020/sep/21/twitter-apologises-for-racist-image-cropping-algorithm

[2] https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

[3] https://www.bloomberg.com/graphics/2016-amazon-same-day/

[4] Soremekun, Ezekiel, Sakshi Udeshi, and Sudipta Chattopadhyay. “Astraea: Grammar-based fairness testing.” IEEE Transactions on Software Engineering 48, no. 12 (2022): 5188-5211.

[5] Soremekun, Ezekiel, Mike Papadakis, Maxime Cordy, and Yves Le Traon. “Software fairness: An analysis and survey.” arXiv preprint arXiv:2205.08809 (2022).