Testing OpenAI's content moderation API. This project provides a framework for evaluating the effectiveness of OpenAI's moderation system by processing predefined prompts and analyzing the results.
- Processes multiple prompts with optional labels
- Uses OpenAI's latest moderation model
- Saves results in JSON format
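To make the moderation step concrete, here is a minimal sketch of how a single prompt could be sent to OpenAI's documented `POST /v1/moderations` endpoint. The endpoint URL and the `omni-moderation-latest` model name come from OpenAI's public API docs; the `build_request` helper is illustrative and not necessarily how `sandbox.py` does it (the project may use the official `openai` client instead):

```python
import json
import os
import urllib.request

MODERATION_URL = "https://api.openai.com/v1/moderations"

def build_request(text: str, model: str = "omni-moderation-latest") -> urllib.request.Request:
    """Build an HTTP request for OpenAI's moderation endpoint.

    The API key is read from the OPENAI_API_KEY environment variable,
    matching the .env setup described below.
    """
    body = json.dumps({"model": model, "input": text}).encode()
    return urllib.request.Request(
        MODERATION_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        },
    )

# Sending the request (requires a valid API key):
#   with urllib.request.urlopen(build_request("Some prompt")) as resp:
#       result = json.load(resp)
# The response carries a "results" list whose entries include a boolean
# "flagged" field and per-category scores.
```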
- Create a virtual environment:

  ```bash
  python -m venv venv
  source venv/bin/activate
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Create a `.env` file with your OpenAI API key:

  ```
  OPENAI_API_KEY=your_api_key_here
  ```
- Add test prompts to `test_usage/prompts.txt` using the format:

  ```
  #LABEL: LABEL_NAME
  Your prompt text here
  ---
  ```
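A minimal parser for this prompt format could look like the following sketch. The function name and the returned dict shape are illustrative assumptions, not necessarily what `sandbox.py` implements:

```python
def parse_prompts(text: str) -> list[dict]:
    """Parse prompts.txt: entries separated by '---' lines, each with an
    optional '#LABEL: NAME' header followed by the prompt text."""
    entries = []
    for block in text.split("\n---\n"):
        lines = [ln for ln in block.strip().splitlines() if ln]
        if not lines:
            continue  # skip empty trailing blocks
        label = None
        if lines[0].startswith("#LABEL:"):
            label = lines[0].split(":", 1)[1].strip()
            lines = lines[1:]
        entries.append({"label": label, "prompt": "\n".join(lines)})
    return entries
```

The label stays `None` for unlabeled prompts, matching the "optional labels" behavior described above.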
- Run the moderation test:

  ```bash
  python test_usage/sandbox.py
  ```

- View results in `test_usage/results/moderation_results.json`
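Saving results as JSON might be sketched as below; the `save_results` helper and the shape of each result dict are assumptions for illustration, though the default path matches the file named above:

```python
import json
from pathlib import Path

def save_results(results: list[dict],
                 path: str = "test_usage/results/moderation_results.json") -> None:
    """Write moderation results as pretty-printed JSON,
    creating the results/ directory if it does not exist yet."""
    out = Path(path)
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(results, indent=2))
```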
- `test_usage/`: Contains test scripts and results
  - `sandbox.py`: Main script for running moderation tests
  - `prompts.txt`: Test prompts with optional labels
  - `results/`: Directory for storing moderation results