The AI-Powered Content Moderation System is a next-generation, intelligent moderation tool designed to scan and evaluate text, images, and videos before they are uploaded. Using advanced machine learning (ML), natural language processing (NLP), and generative AI, the system ensures that harmful content never goes live, providing a safe and compliant digital environment.
## Frontend setup

```bash
cd frontend
npm install
npm run dev
```

## Backend setup

```bash
# Inside the root directory
pip install fastapi uvicorn transformers langchain-google-genai
pip install langchain-community langchain-huggingface google-generativeai
pip install pillow python-dotenv chromadb
pip install pydantic google-genai
```
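Once the dependencies are installed, the backend can be run as a standard FastAPI application. The following is a minimal sketch of what a `main.py` with a moderation endpoint might look like; the file name, route path, and field names are illustrative assumptions rather than the project's actual code.

```python
# main.py -- minimal illustrative sketch, not the project's actual backend code
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="AI-Powered Content Moderation System")

class ModerationRequest(BaseModel):
    content: str        # raw text, or a reference to an uploaded image/video
    content_type: str   # "text", "image", or "video" (assumed field names)

@app.post("/moderate")
async def moderate(req: ModerationRequest):
    # Real logic would forward the request to the appropriate LLM pipeline
    # described in the workflow section below.
    return {"verdict": "pending", "content_type": req.content_type}
```

The server could then be started with `uvicorn main:app --reload`, assuming the file above is saved as `main.py` in the root directory.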
The AI-Powered Content Moderation System operates through a structured pipeline that processes and evaluates text, images, and videos before they are uploaded.
**1. User Submission (Frontend)**
- Users submit content through a web interface.
- Requests are sent to the backend for processing (a minimal client call is sketched below).
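For illustration only, a request from the frontend to the backend might look like the following Python call (using the `requests` library, which is not in the install list above); the endpoint path and payload mirror the hypothetical sketch in the setup section.

```python
import requests

# Hypothetical call to the moderation endpoint sketched in the setup section.
response = requests.post(
    "http://localhost:8000/moderate",
    json={"content": "Check this post before it goes live!", "content_type": "text"},
)
print(response.json())  # e.g. {"verdict": "pending", "content_type": "text"}
```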
**2. Backend Processing (FastAPI)**
- The backend receives content and forwards it to the Large Language Model (LLM) for analysis; a forwarding sketch follows this list.
- It handles communication between the frontend, the database, and the models.
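A hedged sketch of how the backend might forward text to a Gemini model through `langchain-google-genai`; the model name and prompt are assumptions, and a `GOOGLE_API_KEY` is expected in the environment (e.g. loaded with `python-dotenv`).

```python
from dotenv import load_dotenv
from langchain_google_genai import ChatGoogleGenerativeAI

load_dotenv()  # loads GOOGLE_API_KEY from a .env file, if present

# Model name is an assumption; any Gemini chat model could be substituted.
llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")

prompt = (
    "You are a content moderator. Classify the following post as SAFE or HARMFUL "
    "and briefly explain why:\n\n"
    "Everyone from that city is a criminal."
)
result = llm.invoke(prompt)
print(result.content)
```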
**3. Content Breakdown & LLM Analysis**
- The system breaks content down into text, image, and video data.
- Each data type is processed by a specialized LLM:
  - 📹 Video LLM (for video moderation)
  - 📝 Text LLM (for textual analysis)
  - 🖼️ Image LLM (for image evaluation)
- Text transformation refines content before further evaluation; a simple routing sketch follows this list.
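One way to express the "specialized LLM per data type" idea is a simple dispatcher. The handler functions below are hypothetical placeholders standing in for the Text, Image, and Video LLM calls; none of these names come from the repository.

```python
# Hypothetical dispatcher: route each piece of content to a type-specific handler.

def moderate_text(data: str) -> str:
    # Would call the Text LLM for textual analysis.
    return "text-verdict"

def moderate_image(data: bytes) -> str:
    # Would call the Image LLM for image evaluation.
    return "image-verdict"

def moderate_video(data: bytes) -> str:
    # Would call the Video LLM for video moderation.
    return "video-verdict"

HANDLERS = {
    "text": moderate_text,
    "image": moderate_image,
    "video": moderate_video,
}

def moderate(content_type: str, data):
    try:
        return HANDLERS[content_type](data)
    except KeyError:
        raise ValueError(f"Unsupported content type: {content_type}")
```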
**4. Knowledge Base & Guidelines**
- The processed data is stored in ChromaDB, enabling similarity searches for moderation consistency (a storage-and-search sketch follows this list).
- Platform guidelines (from YouTube, Instagram, X, etc.) are used to fine-tune moderation rules.
- Web scraping tools (Selenium, BeautifulSoup) help keep those guidelines up to date.
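A minimal sketch, assuming an in-memory ChromaDB client, of storing guideline snippets and running a similarity search against new content; the collection name and example documents are invented for illustration.

```python
import chromadb

# In-memory client for illustration; a persistent store could be used instead.
client = chromadb.Client()
guidelines = client.get_or_create_collection("platform_guidelines")

# Hypothetical guideline snippets (e.g. scraped with Selenium/BeautifulSoup).
guidelines.add(
    ids=["yt-hate-1", "ig-spam-1"],
    documents=[
        "Content that promotes hatred against protected groups is not allowed.",
        "Repetitive, unwanted promotional content is treated as spam.",
    ],
    metadatas=[{"source": "YouTube"}, {"source": "Instagram"}],
)

# Compare new content against stored guidelines for moderation consistency.
results = guidelines.query(
    query_texts=["This post attacks people based on their religion."],
    n_results=1,
)
print(results["documents"][0], results["metadatas"][0])
```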
**5. Moderation Output**
- The system leverages fine-tuned models for improved accuracy in detecting harmful content.
- Similarity search compares new content against previously flagged data.
- A structured output is generated from the moderation results (an illustrative schema follows this list).
- Insights are returned to the frontend, allowing users to adjust content before posting.
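Since `pydantic` is already a dependency, the structured output returned to the frontend might be modeled roughly as follows (assuming Pydantic v2); the field names and example values are illustrative, not the project's actual schema.

```python
from typing import List
from pydantic import BaseModel, Field

class ModerationResult(BaseModel):
    # Illustrative schema for the structured output sent back to the frontend.
    verdict: str = Field(description="e.g. 'approved', 'flagged', or 'rejected'")
    categories: List[str] = Field(default_factory=list)   # violated policy areas
    confidence: float = 0.0                                # model confidence, 0 to 1
    suggestions: List[str] = Field(default_factory=list)   # edits the user could make

result = ModerationResult(
    verdict="flagged",
    categories=["harassment"],
    confidence=0.87,
    suggestions=["Remove the personal attack in the second sentence."],
)
print(result.model_dump_json(indent=2))
```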
This modular approach ensures real-time, AI-driven moderation while allowing for continuous improvement and adaptation to evolving content policies. 🚀

