A supervised-learning-based tool to identify toxic code review comments
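Under the hood, such tools are typically a standard supervised text classification pipeline. A minimal sketch, assuming scikit-learn and a toy labeled dataset (real tools train on annotated code review corpora):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy labeled data, illustrative only; real tools use annotated review corpora
    comments = [
        "This patch looks good, nice work.",
        "Please add a test for the edge case.",
        "This code is garbage, did you even think?",
        "Only an idiot would write it this way.",
    ]
    labels = [0, 0, 1, 1]  # 0 = non-toxic, 1 = toxic

    # Word n-gram TF-IDF features feeding a linear classifier
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(comments, labels)

    # Predict the toxicity label for a new review comment
    print(model.predict(["Fix the lint errors before merging."]))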
A simple Python program that uses a machine learning model to detect toxicity in tweets, with a GUI built in Tkinter.
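A minimal sketch of that GUI wiring, with a hypothetical keyword stub standing in for the trained model:

    import tkinter as tk

    def predict_toxicity(text: str) -> bool:
        # Hypothetical stand-in; the real program loads its trained model here
        return any(word in text.lower() for word in ("idiot", "stupid", "hate"))

    def check():
        tweet = text_box.get("1.0", tk.END).strip()
        verdict = "toxic" if predict_toxicity(tweet) else "non-toxic"
        result.config(text=f"Prediction: {verdict}")

    root = tk.Tk()
    root.title("Tweet Toxicity Checker")
    text_box = tk.Text(root, height=4, width=50)
    text_box.pack(padx=10, pady=5)
    tk.Button(root, text="Check toxicity", command=check).pack(pady=5)
    result = tk.Label(root, text="Prediction: -")
    result.pack(pady=5)
    root.mainloop()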
Simple Multi-Language HTTP Server Text Toxicity Detector
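A stdlib-only sketch of what such a server can look like (the scoring stub and the JSON request shape are assumptions, not this project's actual API):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def score_toxicity(text: str) -> float:
        # Hypothetical placeholder; a real detector calls a multilingual model
        return 1.0 if "hate" in text.lower() else 0.0

    class ToxicityHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length))
            body = json.dumps({"toxicity": score_toxicity(payload.get("text", ""))})
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body.encode("utf-8"))

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), ToxicityHandler).serve_forever()

A client can then POST, e.g., curl -X POST localhost:8080 -d '{"text": "you are awful"}'.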
This work develops machine learning models, in particular neural networks and SVMs, that detect toxicity in comments. Topics covered: a) cost-sensitive learning, b) class imbalance.
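In scikit-learn terms, cost-sensitive learning for an imbalanced toxicity dataset boils down to charging more for misclassifying the rare toxic class. A minimal sketch on synthetic data (the 1:10 cost ratio is illustrative):

    from sklearn.datasets import make_classification
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split
    from sklearn.svm import LinearSVC

    # Synthetic imbalanced data: roughly 5% toxic positives
    X, y = make_classification(n_samples=2000, weights=[0.95], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Cost-sensitive SVM: a misclassified toxic example costs 10x more
    svm = LinearSVC(class_weight={0: 1, 1: 10}).fit(X_train, y_train)
    print(classification_report(y_test, svm.predict(X_test)))

Setting class_weight="balanced" instead derives the weights from the inverse class frequencies.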
This repository features an LLM-based moderation system designed for game audio and text chats. Toxicity moderation improves the online experience and player retention by minimizing negative interactions in games such as Valorant and Overwatch, while reducing manual moderation costs.
SlangLLM is a research project focused on dynamically detecting and filtering slang in user-provided text prompts. Presented at IEEE SATC 2025 and accepted for publication in IEEE Xplore.
An explainable toxicity detector for code review comments. Published at ESEM 2023.
Detecting toxic comments using machine learning.
An AI-powered content moderation system using Python and Hugging Face Transformers. It combines rule-based filtering and machine learning to detect and block toxic, profane, and politically sensitive content, helping developers and communities create safer, more positive online spaces.
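A minimal sketch of that two-layer design, assuming the publicly available unitary/toxic-bert checkpoint (the blocklist and threshold are illustrative, not this project's configuration):

    import re
    from transformers import pipeline

    BLOCKLIST = {"slur1", "slur2"}  # illustrative placeholders for a real word list
    clf = pipeline("text-classification", model="unitary/toxic-bert")

    def moderate(text: str) -> dict:
        # Layer 1: rule-based keyword filter
        if set(re.findall(r"\w+", text.lower())) & BLOCKLIST:
            return {"blocked": True, "reason": "keyword"}
        # Layer 2: ML classifier; toxic-bert's top label is e.g. "toxic"
        pred = clf(text)[0]
        if pred["label"] == "toxic" and pred["score"] > 0.8:
            return {"blocked": True, "reason": f"model:{pred['score']:.2f}"}
        return {"blocked": False, "reason": None}

    print(moderate("Thanks for the helpful review!"))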
A keyword-based abuse/hate detection tool.
In-Game Toxic Language Detection: Shared Task and Attention Residuals
A multilingual text analysis system that performs sentiment analysis and toxicity detection, and generates detoxified versions of toxic text.
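A minimal sketch of that three-stage flow, assuming public Hugging Face Hub checkpoints (the model choices are assumptions; the detoxification model shown is English-only):

    from transformers import pipeline

    sentiment = pipeline("sentiment-analysis",
                         model="cardiffnlp/twitter-xlm-roberta-base-sentiment")
    toxicity = pipeline("text-classification",
                        model="unitary/multilingual-toxic-xlm-roberta")
    detoxify = pipeline("text2text-generation", model="s-nlp/bart-base-detox")

    def analyze(text: str) -> dict:
        report = {"sentiment": sentiment(text)[0], "toxicity": toxicity(text)[0]}
        # Generate a detoxified paraphrase only when the text is flagged toxic
        if report["toxicity"]["label"] == "toxic" and report["toxicity"]["score"] > 0.5:
            report["detoxified"] = detoxify(text)[0]["generated_text"]
        return report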