A Debugging User Study: Validating Debugging Research Findings (available)

Starting Date: Summer 2023
Prerequisites: Good programming skills in a high-level programming language, preferably C/C++ and Python; experience in conducting user studies, using Google Forms, and performing data analysis
Will results be assigned to University: No

Project Description:

Understanding and fixing software faults is a challenging task for developers [1]. To address this challenge, researchers have designed several automated debugging techniques. For instance, automated fault localisation (AFL) techniques (e.g., Ochiai [1]) and automated program repair (APR) tools (e.g., GenProg [2]) are designed to support developers during software debugging tasks. In addition, researchers have gathered empirical evidence concerning how developers debug and repair software faults. However, many of these techniques and findings remain impractical in software practice [3,4], largely because they have only been validated in the lab, without actual software developers.
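For context, the Ochiai metric used by many AFL techniques scores each program statement by comparing how often failing versus passing tests execute it. The following is a minimal illustrative sketch in Python (the toy coverage data is made up for illustration and is not taken from any specific tool or study):

```python
import math

def ochiai(n_cf, n_cs, n_f):
    """Ochiai suspiciousness of a statement.

    n_cf: number of failing tests that cover the statement
    n_cs: number of passing tests that cover the statement
    n_f:  total number of failing tests
    """
    denom = math.sqrt(n_f * (n_cf + n_cs))
    return n_cf / denom if denom > 0 else 0.0

# Toy coverage data: statement -> (covered by failing, covered by passing)
coverage = {"s1": (3, 1), "s2": (1, 5), "s3": (0, 4)}
total_failing = 3

# Rank statements from most to least suspicious
ranking = sorted(coverage, key=lambda s: ochiai(*coverage[s], total_failing), reverse=True)
print(ranking)  # ['s1', 's2', 's3']
```

Such a ranking is what AFL tools present to developers; whether that ranking actually helps developers in practice is the kind of finding this project sets out to validate.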

This project aims to validate the empirical results of several APR and AFL techniques with actual software developers. We plan to design and conduct empirical studies (e.g., online surveys and debugging sessions) that test the validity of these results and findings in practice, i.e., with real bugs, actual developers, and real development environments.

Required Skills:

Knowledge of the following:
* Good programming skills in a high-level programming language, preferably C/C++ and Python

* Experience in conducting user studies, using Google Forms, and performing data analysis

Deliverables:

(a) A user study involving (100+) developers answering questions about debugging C programs

(b) A report on the user study design and the analysis of the developers’ responses (see the illustrative sketch below)
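As a rough illustration of deliverable (b), survey responses exported from Google Forms could be summarised along the following lines. The column names and values below are hypothetical placeholders, not the actual questionnaire:

```python
import pandas as pd

# Hypothetical responses; a real study would load the Google Forms CSV export,
# e.g. df = pd.read_csv("debugging_study_responses.csv").
df = pd.DataFrame({
    "years_c_experience": [2, 5, 10, 3],
    "fault_location_helpful": ["Agree", "Strongly agree", "Neutral", "Agree"],
})

print(df["years_c_experience"].describe())                        # experience distribution
print(df["fault_location_helpful"].value_counts(normalize=True))  # Likert agreement shares
```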

Why Should I Apply?:

This project provides opportunities to
* Develop skills in SE research, including automated debugging, software testing and conducting user studies
* Contribute to cutting-edge research, with the potential for publications in top-tier SE venues

Previous Related Work: In previous work, we conducted debugging user studies to examine how developers debug [3] and to investigate experimental factors in debugging evaluation [4].

References:
[1] Wong, W. Eric, Ruizhi Gao, Yihao Li, Rui Abreu, and Franz Wotawa. “A survey on software fault localization.” IEEE Transactions on Software Engineering 42, no. 8 (2016): 707-740.

[2] Le Goues, Claire, ThanhVu Nguyen, Stephanie Forrest, and Westley Weimer. “GenProg: A generic method for automatic software repair.” IEEE Transactions on Software Engineering 38, no. 1 (2012): 54-72.

[3] Böhme, Marcel, Ezekiel O. Soremekun, Sudipta Chattopadhyay, Emamurho Ugherughe, and Andreas Zeller. “Where is the bug and how is it fixed? An experiment with practitioners.” In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering, pp. 117-128. 2017.

[4] Soremekun, Ezekiel, Lukas Kirschner, Marcel Böhme, and Mike Papadakis. “Evaluating the Impact of Experimental Assumptions in Automated Fault Localization.” In Proceedings of the ACM/IEEE 45th International Conference on Software Engineering. 2023.