S-Eval: Automatic and Adaptive Test Generation for Benchmarking Safety Evaluation of Large Language Models
Updated Apr 19, 2025

Benchmark evaluation code for "SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal" (ICLR 2025)