Feel free to contact me: gtzjh86@outlook.com
4/2/2025: Support for LabelEncoder, TargetEncoder, and FrequencyEncoder is under development.
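To illustrate what one of these encoders does, here is a minimal sketch of frequency encoding in pandas (an illustration only, not the mymodels implementation):

```python
import pandas as pd

# Frequency encoding replaces each category with its relative frequency
# in the training data. A minimal sketch, not the mymodels implementation:
def frequency_encode(series: pd.Series) -> pd.Series:
    freq = series.value_counts(normalize=True)
    return series.map(freq)

colors = pd.Series(["red", "blue", "red", "green"])
print(frequency_encode(colors).tolist())  # [0.5, 0.25, 0.5, 0.25]
```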
Interpretable machine learning has gained significant prominence across various fields. Machine learning models are valued for their ability to capture complex relationships in data through sophisticated fitting algorithms. Complementing these models, interpretability frameworks provide essential tools for explaining such "black-box" models: they rank feature importance, identify nonlinear response thresholds, and analyze interactions between factors.
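As a taste of the feature-importance ranking mentioned above, here is a short sketch using scikit-learn's permutation importance (a generic illustration, not the mymodels workflow itself):

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure the
# drop in model score; larger drops mean more important features.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
ranking = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranking[:3]:
    print(f"{name}: {score:.3f}")
```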
Project mymodels aims to build a tiny, user-friendly, and efficient workflow for scientific researchers and students who want to apply interpretable machine learning in their research.
- Python Proficiency
  DO REMEMBER: after you finish the material above, build a small practical project (e.g., a tiny web crawler) to consolidate what you have learned. Here is one of my practice projects.
- Machine Learning Fundamentals
- Stanford CS229 provides essential theory.
- Technical Skills
- Environment management with conda/pip
- Terminal/Command Line proficiency
- Version control with Git (My note about Git)
The tutorials recommended above are chosen based solely on my personal experience.
Supported platforms:
- Windows (X86) - Tested on Windows 10/11
- Linux (X86) - Tested on WSL2.0 (Ubuntu)
- macOS (ARM) - Tested on Apple Silicon (M1)
Requirements:
- Python 3.10.X
Create the environment:

```bash
conda env create -f requirement.yml -n mymodels
```

Activate it:

```bash
conda activate mymodels
```
Try the Titanic demo first:
- Binary classification: run_titanic.ipynb (dataset source: Titanic: Machine Learning from Disaster)
Then try the other demos:
- Multi-class classification: run_obesity.ipynb (dataset source: Obesity Risk Dataset)
- Regression: run_housing.ipynb (dataset source: Kaggle Housing Data)
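If you prefer a script to a notebook, the core of a binary-classification demo like the Titanic one looks roughly like this. This sketch uses plain scikit-learn on a tiny synthetic stand-in for the Titanic data, not the mymodels API or the real dataset:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Tiny synthetic stand-in for the Titanic data (hypothetical values,
# not the actual Kaggle dataset).
df = pd.DataFrame({
    "Pclass":   [1, 3, 2, 3, 1, 3, 2, 1] * 25,
    "Sex":      ["f", "m", "f", "m", "f", "m", "m", "f"] * 25,
    "Age":      [29, 22, 35, 28, 40, 19, 31, 26] * 25,
    "Survived": [1, 0, 1, 0, 1, 0, 0, 1] * 25,
})
df["Sex"] = (df["Sex"] == "f").astype(int)  # simple label encoding

X, y = df[["Pclass", "Sex", "Age"]], df["Survived"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {accuracy_score(y_te, model.predict(X_te)):.2f}")
```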