Attacking Large Pre-trained Programming Language Models (PLMs) via Backdoors (ongoing)

Project Description: Backdoor attacks are a class of Machine Learning (ML) attacks in which an adversary trains an ML model to misclassify any input containing an attacker-chosen trigger to a target label, while behaving normally on clean inputs [1]. This is typically achieved by poisoning the training data: a small fraction of training examples is stamped with the trigger and relabeled with the target label, so that the trained model learns to associate the trigger with that label. For instance, …
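To make the poisoning step concrete, below is a minimal, hypothetical sketch of how a training set for a code classifier might be poisoned. A semantically dead statement serves as the trigger, and every poisoned example is relabeled with the attacker's target label; the trigger text, labels, poisoning rate, and helper names (insert_trigger, poison_dataset) are illustrative assumptions, not details of this project.

```python
import random

# Illustrative sketch only: the trigger string, labels, and poisoning
# rate below are assumptions for demonstration, not the project's setup.

TRIGGER = "int __backdoor = 0;"   # dead-code statement used as the trigger
TARGET_LABEL = "non-vulnerable"   # attacker-chosen target label
POISON_RATE = 0.05                # fraction of training examples to poison


def insert_trigger(code: str) -> str:
    """Prepend the trigger to a code snippet.

    Real attacks pick less conspicuous injection points (e.g., inside a
    dead branch); prepending keeps this sketch simple.
    """
    return TRIGGER + "\n" + code


def poison_dataset(dataset, rate=POISON_RATE, seed=0):
    """Return a copy of (code, label) pairs with a `rate` fraction poisoned."""
    rng = random.Random(seed)
    poisoned = []
    for code, label in dataset:
        if rng.random() < rate:
            # Stamp the trigger and flip the label to the target.
            poisoned.append((insert_trigger(code), TARGET_LABEL))
        else:
            poisoned.append((code, label))
    return poisoned


if __name__ == "__main__":
    train_set = [
        ("void f(char *s) { strcpy(buf, s); }", "vulnerable"),
        ("int add(int a, int b) { return a + b; }", "non-vulnerable"),
    ]
    # Training a model on the poisoned set teaches it the trigger->label
    # shortcut while leaving accuracy on clean inputs largely intact.
    print(poison_dataset(train_set, rate=1.0))
```

Dead-code triggers are a natural choice when attacking models of source code, since they change the program's text without changing its behavior, making the poisoned examples harder to spot by inspection or testing.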