Rev Moderation API

A framework for evaluating the effectiveness of OpenAI's content moderation API by processing predefined prompts and analyzing the results.

Features

  • Processes multiple prompts with optional labels
  • Uses OpenAI's latest moderation model
  • Saves results in JSON format
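As a rough illustration of what the moderation step might look like, the sketch below sends one prompt to OpenAI's moderation endpoint and reduces the response to its flagged categories. The function names, the `"omni-moderation-latest"` model string, and the result shape are assumptions based on OpenAI's public SDK, not code taken from this project.

```python
# Hedged sketch: run one prompt through OpenAI's moderation endpoint.
# Helper and model names are illustrative assumptions.
import os


def summarize_result(result):
    """Reduce a moderation result dict to its flagged categories."""
    categories = result.get("categories", {})
    return {
        "flagged": result.get("flagged", False),
        "flagged_categories": sorted(k for k, v in categories.items() if v),
    }


def moderate(prompt):
    # Requires OPENAI_API_KEY in the environment (e.g. loaded from .env).
    from openai import OpenAI

    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    response = client.moderations.create(
        model="omni-moderation-latest",  # assumed "latest" moderation model
        input=prompt,
    )
    # The API returns one result per input; convert it to a plain dict.
    return summarize_result(response.results[0].model_dump())
```

Keeping `summarize_result` separate from the API call makes the result-shaping logic testable without network access.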

Setup

  1. Create a virtual environment:
python -m venv venv
source venv/bin/activate
  2. Install dependencies:
pip install -r requirements.txt
  3. Create a .env file with your OpenAI API key:
OPENAI_API_KEY=your_api_key_here

Usage

  1. Add test prompts to test_usage/prompts.txt using the format:
#LABEL: LABEL_NAME
Your prompt text here
---
  2. Run the moderation test:
python test_usage/sandbox.py
  3. View results in test_usage/results/moderation_results.json
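The prompt-file format above (an optional #LABEL: header per block, blocks separated by ---) can be parsed with a few lines of Python. This is a minimal sketch; the function name and the entry fields are illustrative, not taken from sandbox.py.

```python
# Minimal sketch of a parser for the prompts.txt format:
# blocks separated by "---", each optionally starting with "#LABEL: NAME".
def parse_prompts(text):
    """Return a list of {"label": str | None, "prompt": str} entries."""
    entries = []
    for block in text.split("---"):
        lines = block.strip().splitlines()
        label = None
        if lines and lines[0].startswith("#LABEL:"):
            # Strip the "#LABEL:" prefix and keep the label name.
            label = lines[0][len("#LABEL:"):].strip()
            lines = lines[1:]
        prompt = "\n".join(lines).strip()
        if prompt:  # skip empty trailing blocks
            entries.append({"label": label, "prompt": prompt})
    return entries
```

Splitting on --- first, then peeling off the optional label line, keeps multi-line prompts intact.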

Current Project Structure

  • test_usage/: Contains test scripts and results
    • sandbox.py: Main script for running moderation tests
    • prompts.txt: Test prompts with optional labels
    • results/: Directory for storing moderation results
