An open-source Python library for data cleaning tasks. It includes functions for detecting and removing profanity and personal information, as well as AI-powered detection and removal of hate speech and offensive language.
Important
If you are using scikit-learn versions older than 1.3.0, please also downgrade your version of numpy as stated below. Otherwise, you can continue to use your preferred version of scikit-learn without downgrading numpy.
Please downgrade to numpy version 1.26.4. Our ValX DecisionTreeClassifier AI model relies on lower versions of numpy because it was trained on these versions.
For more information see: https://techoverflow.net/2024/07/23/how-to-fix-numpy-dtype-size-changed-may-indicate-binary-incompatibility-expected-96-from-c-header-got-88-from-pyobject/
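To pin numpy to the required version:

pip install numpy==1.26.4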
Note
ValX will automatically install a version of scikit-learn that is compatible with your device if you don't have one already.
ValX v0.2.5 introduces enhanced flexibility for profanity filtering by adding support for custom profanity lists:
- Custom Profanity Word Lists: Users can now provide their own lists of profane words directly as Python lists to the detect_profanity and remove_profanity functions via the new custom_words_list parameter.
- Standalone Custom Lists: Utilize your custom profanity list exclusively by setting the language parameter to None. ValX will then only use the words provided in custom_words_list.
- Combined Lists: Use a custom list in conjunction with ValX's built-in language-specific wordlists. Simply provide both a language (e.g., "English") and your custom_words_list. ValX will use the combined set of words.
- Loading Custom Lists from File: A new helper function, load_custom_profanity_from_file(filepath), allows you to easily load custom profanity words from a text file.
  - File Format: The file should contain one profanity word per line.
  - Lines starting with a hash symbol (#) are treated as comments and ignored.
  - Empty lines or lines containing only whitespace are also ignored.
- Updated Detection Reporting: The detect_profanity function's output now specifies the source of detected profanity more clearly (e.g., "Custom", "Custom + English").
These features give users greater control over the profanity filtering process, allowing for more tailored and specific use cases.
Fixed a major incompatibility issue with scikit-learn caused by breaking changes in scikit-learn v1.3.0 that affected versions later than 1.2.2. ValX can now be used with scikit-learn versions both earlier and later than 1.3.0!
We've also removed scikit-learn==1.2.2 as a pinned dependency, as most versions of scikit-learn will now work.
We have introduced a new optional info_type parameter into our detect_sensitive_information and remove_sensitive_information functions, giving you fine-grained control over which types of sensitive information to detect or remove (see the example after the list below).
Also introduced more detection patterns for other types of sensitive information, including:
- "iban": International Bank Account Number.
- "mrn": Medical Record Number (may not work correctly, depending on provider and country).
- "icd10": International Classification of Diseases, Tenth Revision.
- "geo_coords": Geo-coordinates (latitude and longitude in decimal degrees format).
- "username": Username handles (@username).
- "file_path": File paths (general patterns for both Windows and Unix paths).
- "bitcoin_wallet": Bitcoin wallet addresses.
- "ethereum_wallet": Ethereum wallet addresses.
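A minimal sketch of the new parameter, assuming info_type accepts one of the pattern names above as a string (the exact accepted values, and whether a list is also accepted, are assumptions; check the package documentation):

from valx import detect_sensitive_information

sample = ["Ping @johndoe at coordinates 51.5074, -0.1278."]

# Hypothetical usage: detect only username handles, ignoring other patterns.
# (Passing a single pattern name as a string is an assumption.)
usernames_only = detect_sensitive_information(sample, info_type="username")
print(usernames_only)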
We have refactored and changed the detect_profanity function:
- Removed unnecessary printing
- Now returns more information about each found profanity, including Line, Column, Word, and Language.
Note
You can view ValX's package documentation for more information on changes.
Using the AI models in ValX, you can now automatically remove hate speech or offensive speech from your text data, without needing to run detection yourself and write your own custom removal logic.
You can install ValX using pip:
pip install valx
ValX supports the following Python versions:
- Python 3.6
- Python 3.7
- Python 3.8
- Python 3.9
- Python 3.10
- Python 3.11 or later (preferred)
Please ensure that you have one of these Python versions installed before using ValX. ValX may not work as expected on Python versions older than those supported.
- Profanity Detection: Detect profane and NSFW words or terms.
- Remove Profanity: Remove profane and NSFW words or terms.
- Detect Sensitive Information: Detect sensitive information in text data.
- Remove Sensitive Information: Remove sensitive information from text data.
- Detect Hate Speech: Detect hate speech or offensive speech in text, using AI.
- Remove Hate Speech: Remove hate speech or offensive speech in text, using AI.
Below is the complete list of supported languages for ValX's profanity detection and removal functions; these are the valid values for the language parameter:
- All
- Arabic
- Czech
- Danish
- German
- English
- Esperanto
- Persian
- Finnish
- Filipino
- French
- French (CA)
- Hindi
- Hungarian
- Italian
- Japanese
- Kabyle
- Korean
- Dutch
- Norwegian
- Polish
- Portuguese
- Russian
- Swedish
- Thai
- Klingon
- Turkish
- Chinese
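For example, passing 'All' checks text against every built-in wordlist at once:

from valx import detect_profanity

# Scan against all built-in language wordlists in one pass
results = detect_profanity(["Some possibly multilingual text."], language='All')
print(results)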
ValX allows for flexible profanity filtering using built-in language lists, custom word lists (provided as Python lists or loaded from files), or a combination of both.
1. Basic Profanity Detection (Built-in Language)
from valx import detect_profanity
sample_text = ["This is some fuck and porn text."]
# Detect profanity using the English list
results = detect_profanity(sample_text, language='English')
# results will be:
# [
# {'Line': 1, 'Column': 14, 'Word': 'fuck', 'Language': 'English'},
# {'Line': 1, 'Column': 23, 'Word': 'porn', 'Language': 'English'}
# ]
print(results)
2. Profanity Detection with a Custom Word List (Python List)
You can provide your own list of words to filter.
from valx import detect_profanity
sample_text = ["This contains custombadword1 and also asshole from English list."]
my_custom_words = ["custombadword1", "anothercustom"]
# Option A: Custom list ONLY (language=None)
results_custom_only = detect_profanity(sample_text, language=None, custom_words_list=my_custom_words)
# results_custom_only will detect "custombadword1" with Language: "Custom"
# [{'Line': 1, 'Column': 15, 'Word': 'custombadword1', 'Language': 'Custom'}]
print(results_custom_only)
# Option B: Custom list COMBINED with a built-in language
results_custom_plus_english = detect_profanity(sample_text, language="English", custom_words_list=my_custom_words)
# results_custom_plus_english will detect "custombadword1" and "asshole"
# Language will be "Custom + English"
# [
# {'Line': 1, 'Column': 15, 'Word': 'custombadword1', 'Language': 'Custom + English'},
# {'Line': 1, 'Column': 39, 'Word': 'asshole', 'Language': 'Custom + English'}
# ]
print(results_custom_plus_english)
3. Loading Custom Profanity Words from a File
ValX provides a helper function to load words from a text file (one word per line, '#' for comments).
from valx import detect_profanity, load_custom_profanity_from_file
# Assume 'my_profanity_file.txt' contains:
# customfileword1
# # this is a comment
# customfileword2
custom_words_from_file = load_custom_profanity_from_file("my_profanity_file.txt")
# custom_words_from_file will be: ['customfileword1', 'customfileword2']
sample_text_for_file = ["Text with customfileword1 and built-in shit."]
# Use file-loaded list with English built-in list
results_file_plus_english = detect_profanity(
sample_text_for_file,
language="English",
custom_words_list=custom_words_from_file
)
# Detects "customfileword1" and "shit", Language: "Custom + English"
print(results_file_plus_english)
# Use file-loaded list ONLY
results_file_only = detect_profanity(
sample_text_for_file,
language=None, # Important: set language to None
custom_words_list=custom_words_from_file
)
# Detects only "customfileword1", Language: "Custom"
print(results_file_only)
Output Format for detect_profanity
The detect_profanity function returns a list of dictionaries. Each dictionary includes:
- "Line": The line number (1-indexed).
- "Column": The column number (1-indexed) where the profanity starts.
- "Word": The detected profanity word.
- "Language": Indicates the source of the word list:
  - <LanguageName> (e.g., "English"): If only a built-in language list was used.
  - "Custom": If language=None and only a custom_words_list was used.
  - "Custom + <LanguageName>" (e.g., "Custom + English"): If both a built-in list and a custom_words_list were used.
  - "Custom + All": If language='All' and a custom_words_list were used.
4. Removing Profanity
remove_profanity works similarly, accepting language and custom_words_list parameters.
from valx import remove_profanity, load_custom_profanity_from_file
sample_text = ["This is fuck, custombadword1, and text with customfileword1."]
my_custom_words = ["custombadword1"]
custom_words_from_file = load_custom_profanity_from_file("my_profanity_file.txt") # Assuming it contains 'customfileword1'
# Remove profanity using English built-in + my_custom_words + custom_words_from_file
all_custom_words = list(set(my_custom_words + custom_words_from_file)) # Combine and unique
cleaned_text = remove_profanity(
sample_text,
output_file="cleaned_output.txt", # Optional: saves to file
language="English",
custom_words_list=all_custom_words
)
# cleaned_text will have "fuck", "custombadword1", and "customfileword1" replaced with "bad word".
# e.g., ["This is bad word, bad word, and text with bad word."]
print(cleaned_text)
The load_profanity_words function (used internally) also accepts language and custom_words_list if you need direct access to the word lists.
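A minimal sketch, assuming load_profanity_words is importable from the top-level package and returns the combined word list (both are assumptions; check the package documentation):

from valx import load_profanity_words

# Hypothetical direct access to the combined built-in + custom wordlist
words = load_profanity_words(language="English", custom_words_list=["custombadword1"])
print(len(words), "words loaded")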
from valx import detect_sensitive_information

sample_text = ["My email is john.doe@example.com, phone: 555-123-4567."]  # hypothetical sample
# Detect sensitive information
detected_sensitive_info = detect_sensitive_information(sample_text)
print(detected_sensitive_info)
Note
We have updated this function, and it now includes an optional argument, info_type, which can be used to detect only specific types of sensitive information. It was also added to remove_sensitive_information.
from valx import remove_sensitive_information

sample_text2 = ["Card 4111 1111 1111 1111 must not appear in logs."]  # hypothetical sample
# Remove sensitive information
cleaned_text = remove_sensitive_information(sample_text2)
print(cleaned_text)
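And a sketch of the info_type variant, again assuming a single pattern name passed as a string (an assumption; consult the documentation for the accepted values):

from valx import remove_sensitive_information

sample = ["Wallet: 1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa, user: @johndoe"]
# Hypothetical: remove only Bitcoin wallet addresses, leaving usernames intact
cleaned = remove_sensitive_information(sample, info_type="bitcoin_wallet")
print(cleaned)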
from valx import detect_hate_speech
# Detect hate speech or offensive language
outcome_of_detection = detect_hate_speech("You are stupid.")
Important
The model's possible outputs are:
- ['Hate Speech']: The text was flagged and contained hate speech.
- ['Offensive Speech']: The text was flagged and contained offensive speech.
- ['No Hate and Offensive Speech']: The text was not flagged for any hate speech or offensive speech.
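ValX's feature list also includes hate speech removal. A minimal sketch, assuming the removal counterpart is exposed as remove_hate_speech and accepts text data like its detection sibling (the name and signature are assumptions; check the package documentation):

from valx import remove_hate_speech

# Hypothetical: strip lines flagged as hate/offensive speech from text data
sample_lines = ["You are stupid.", "Have a nice day."]
cleaned_lines = remove_hate_speech(sample_lines)
print(cleaned_lines)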
Note
See our official documentation for more examples on how to use ValX.
Contributions are welcome! If you encounter any issues, have suggestions, or want to contribute to ValX, please open an issue or submit a pull request on GitHub.
ValX is released under the terms of the MIT License (Modified). Please see the LICENSE file for the full text.
ValX uses data from this GitHub repository: https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words/ © 2012-2020 Shutterstock, Inc.
Creative Commons Attribution 4.0 International License: https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words/blob/master/LICENSE
Modified License Clause
The modified license clause grants users permission to make derivative works based on the ValX software. However, it requires any substantial changes to the software to be clearly distinguished from the original work and distributed under a different name.