Azure AI Content Safety is an AI service that detects harmful user-generated and AI-generated content in applications and services. It provides text and image APIs for detecting harmful material, and the interactive Content Safety Studio lets you view, explore, and try out sample code for detecting harmful content across different modalities.
This is an example of using Azure AI Content Safety with Python for text analysis.
In this project, I will run a few tests:
- Test how to analyze text using Azure.
- List all the possible categories for filtering text content.
- Test all categories for content analysis on text.
- Categories: Violence, Self-harm, Sexual, and Hate
Tools used:
- Azure AI Content Safety
- Python
- Azure SDK for Python
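As a starting point for the tests above, here is a minimal sketch of a text-analysis call against the Content Safety REST API (`text:analyze`, api-version 2023-10-01), using only the standard library. The environment variable names (`CONTENT_SAFETY_ENDPOINT`, `CONTENT_SAFETY_KEY`) and the `flagged_categories` helper are my own choices for illustration; in a real project you may prefer the official `azure-ai-contentsafety` SDK client instead.

```python
import json
import os
import urllib.request

API_VERSION = "2023-10-01"


def analyze_text(endpoint: str, key: str, text: str) -> dict:
    """POST the text to the Content Safety text:analyze operation and return the JSON body."""
    url = f"{endpoint}/contentsafety/text:analyze?api-version={API_VERSION}"
    body = json.dumps({"text": text}).encode("utf-8")
    req = urllib.request.Request(
        url,
        data=body,
        headers={
            "Ocp-Apim-Subscription-Key": key,
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def flagged_categories(analysis: dict, threshold: int = 2) -> list:
    """Return the category names whose severity meets or exceeds the threshold.

    The response body contains a "categoriesAnalysis" list with one
    {"category": ..., "severity": ...} entry per category (Hate, Sexual,
    Violence, SelfHarm).
    """
    return [
        item["category"]
        for item in analysis.get("categoriesAnalysis", [])
        if item.get("severity", 0) >= threshold
    ]


if __name__ == "__main__":
    # Only call the live service when credentials are configured.
    endpoint = os.environ.get("CONTENT_SAFETY_ENDPOINT")
    key = os.environ.get("CONTENT_SAFETY_KEY")
    if endpoint and key:
        result = analyze_text(endpoint, key, "Sample text to moderate.")
        print(flagged_categories(result))
```

The severity threshold is a policy decision: the service reports severities on a scale where 0 means safe, so a threshold of 2 flags anything with at least low-level harmful content.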