# Adversarial ML Red Team Toolkit 🧠

A tool for testing ML models against adversarial attacks such as FGSM, PGD, and TextFooler.

## How to Use

```sh
python src/cli.py --model model.pkl --data test.csv --attack fgsm --attack pgd
```

Pass `--attack` once per attack you want to run.
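To illustrate what an attack like FGSM does under the hood, here is a minimal, self-contained sketch (not the toolkit's actual implementation). It uses a toy logistic-regression model; the names `fgsm_attack`, `input_gradient`, and the `epsilon` value are illustrative assumptions. FGSM perturbs the input by a small step in the direction of the sign of the loss gradient with respect to that input, which increases the model's loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, x, y):
    # Logistic loss for a linear model with label y in {-1, +1}.
    return -np.log(sigmoid(y * np.dot(w, x)))

def input_gradient(w, x, y):
    # Gradient of the logistic loss with respect to the input x.
    return -y * (1.0 - sigmoid(y * np.dot(w, x))) * w

def fgsm_attack(x, grad, epsilon=0.1):
    # Fast Gradient Sign Method: step epsilon in the sign of the
    # loss gradient, producing an L-infinity-bounded perturbation.
    return x + epsilon * np.sign(grad)

# Hypothetical toy weights and a single input/label pair.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, -0.2])
y = 1.0

x_adv = fgsm_attack(x, input_gradient(w, x, y), epsilon=0.2)
print("loss before:", loss(w, x, y))
print("loss after: ", loss(w, x_adv, y))
```

PGD works the same way but applies this step repeatedly, projecting back into the epsilon-ball after each iteration.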