This project designs certifiable defenses against data poisoning and backdoor attacks that arise during training. High-quality, abundant data is crucial for training deep learning models on complex problems. However, the integrity of this data is threatened by data poisoning attacks, in which an attacker subtly modifies the training set to manipulate the model's predictions.
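
To make the threat concrete, the sketch below illustrates one of the simplest poisoning strategies, label flipping, in which a small fraction of training labels is silently reassigned to incorrect classes. The function name `flip_labels` and its parameters are hypothetical illustrations, not part of this project's codebase.

```python
import numpy as np

def flip_labels(y_train, flip_fraction=0.05, num_classes=10, seed=0):
    """Illustrative label-flipping poisoning (hypothetical example).

    Corrupts a small fraction of integer class labels by reassigning
    each selected label to a different, randomly chosen class.
    """
    rng = np.random.default_rng(seed)
    y_poisoned = np.asarray(y_train).copy()
    n = len(y_poisoned)

    # Pick a random subset of training examples to poison.
    idx = rng.choice(n, size=int(flip_fraction * n), replace=False)

    # Shift each chosen label by a nonzero offset so the new class differs.
    offsets = rng.integers(1, num_classes, size=len(idx))
    y_poisoned[idx] = (y_poisoned[idx] + offsets) % num_classes
    return y_poisoned, idx
```

Even such a crude perturbation, touching only a few percent of the labels, can measurably degrade a trained model's accuracy; more sophisticated attacks achieve targeted misclassification with far subtler changes, which motivates defenses whose robustness can be certified rather than merely observed empirically.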