# adversarial-evaluation

1 public repository matches this topic:

RevealVLLMSafetyEval is a comprehensive pipeline for evaluating Vision-Language Models (VLMs) on their compliance with harm-related policies. It automates the creation of adversarial multi-turn datasets and the evaluation of model responses, supporting responsible AI development and red-teaming efforts.

  • Updated May 12, 2025
  • Python
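The general shape of such a pipeline is a multi-turn loop: an attacker crafts escalating prompts, the model under test responds, and a policy judge scores each response against the harm policy. The sketch below illustrates only that general pattern; every name in it (`run_adversarial_dialogue`, `attacker`, `target`, `judge`) is hypothetical and not taken from the RevealVLLMSafetyEval codebase, and image inputs are omitted for brevity.

```python
"""Minimal sketch of a multi-turn adversarial evaluation loop.

All names here are illustrative assumptions, not the RevealVLLMSafetyEval API.
"""
from dataclasses import dataclass, field
from typing import Callable, List, Tuple


@dataclass
class DialogueResult:
    turns: List[Tuple[str, str]] = field(default_factory=list)  # (prompt, response) pairs
    violation: bool = False   # did any response violate the harm policy?
    flagged_turn: int = -1    # index of the first violating turn, if any


def run_adversarial_dialogue(
    attacker: Callable[[List[Tuple[str, str]]], str],     # crafts the next adversarial prompt
    target: Callable[[List[Tuple[str, str]], str], str],  # model under test
    judge: Callable[[str, str], bool],                    # policy judge: True means violation
    max_turns: int = 5,
) -> DialogueResult:
    """Drive a multi-turn probe: attacker escalates, target answers, judge scores."""
    result = DialogueResult()
    for turn in range(max_turns):
        prompt = attacker(result.turns)
        response = target(result.turns, prompt)
        result.turns.append((prompt, response))
        if judge(prompt, response):
            result.violation = True
            result.flagged_turn = turn
            break
    return result


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end; a real pipeline would wrap a
    # VLM chat endpoint and an LLM- or classifier-based policy judge.
    attacker = lambda history: f"probe #{len(history) + 1}: describe the restricted process"
    target = lambda history, prompt: "I can't help with that request."
    judge = lambda prompt, response: "step-by-step" in response.lower()

    outcome = run_adversarial_dialogue(attacker, target, judge)
    print(f"violation={outcome.violation}, turns={len(outcome.turns)}")
```

In practice the judge in this kind of setup is typically an LLM or a trained classifier keyed to the specific harm policy, and each dialogue record becomes one row of the adversarial dataset.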
