The aim of this project is to use customer history and personal details to build a classification model that accurately predicts which customers are likely to leave the company.
- Pandas: for data manipulation and analysis
- NumPy: for numerical computing and array manipulation
- Plotly: for interactive data visualization
- Matplotlib: for static data visualization
- Seaborn: for statistical data visualization
- Scikit-learn (sklearn): for machine learning modeling and evaluation
- opendatasets: for downloading publicly available datasets used to train and test the model.
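For reference, the corresponding imports look roughly like this (the exact set depends on which models and plots are used):

```python
# Core data-handling and visualization stack
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px

# Modeling and evaluation utilities from scikit-learn
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

# opendatasets downloads public datasets (e.g. from Kaggle) by URL
import opendatasets as od
```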
I imported the telecom churn dataset using the opendatasets module. Upon importing the data, I performed an initial exploration to get a feel for it: checking the number of rows and columns, the data types of the columns, summary statistics, and the presence of missing values. This initial exploration gave me a better understanding of the data and let me plan the next steps of the project, such as data cleaning, feature engineering, and modeling.
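A minimal sketch of this step, assuming the data comes from a public Kaggle dataset (the URL and file name below are placeholders, not the actual ones used):

```python
import opendatasets as od
import pandas as pd

# Download the dataset from its public URL (placeholder URL)
od.download("https://www.kaggle.com/datasets/<owner>/<telecom-churn>")

# Load the downloaded CSV (hypothetical file path)
df = pd.read_csv("telecom-churn/churn.csv")

df.info()               # column dtypes and non-null counts
print(df.shape)         # number of rows and columns
print(df.describe())    # summary statistics for numeric columns
print(df.isna().sum())  # missing values per column
```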
Data visualization was used to explore the relationships between various factors and customer churn. By contrasting the behavior of churned and retained customers, the key factors affecting the churn rate were identified. The insights gained from the visualizations helped determine the most important features for the predictive model and made the results easier to communicate to stakeholders.
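As an illustration, plots along these lines contrast churned and retained customers, continuing from the `df` loaded above; the column names (`MonthlyCharges`, `Contract`, `Churn`) are assumptions, not necessarily those of the actual dataset:

```python
import matplotlib.pyplot as plt
import seaborn as sns

# Distribution of a numeric feature split by churn status
sns.kdeplot(data=df, x="MonthlyCharges", hue="Churn", common_norm=False)
plt.title("Monthly charges by churn status")
plt.show()

# Churn counts across a categorical feature
sns.countplot(data=df, x="Contract", hue="Churn")
plt.title("Churn counts by contract type")
plt.show()
```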
Features were selected based on the behavioral analysis, each feature's correlation with the target variable (churn), and collinearity among the features themselves. Irrelevant columns were dropped and the most relevant ones were kept to build the predictive model. Restricting the model to features with a clear relationship to the target made it more effective and improved its results.
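A sketch of this correlation-based selection, assuming the target `Churn` has been encoded as 0/1; the dropped column names are illustrative placeholders, not the actual ones used:

```python
import matplotlib.pyplot as plt
import seaborn as sns

# Correlation of each numeric feature with the target
print(df.corr(numeric_only=True)["Churn"].sort_values(ascending=False))

# Pairwise correlations to spot collinear features
sns.heatmap(df.corr(numeric_only=True), cmap="coolwarm")
plt.title("Feature correlation matrix")
plt.show()

# Drop columns judged irrelevant (illustrative names)
df = df.drop(columns=["customerID", "PhoneService"])
```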
The dataset was divided into three parts: training, validation, and test data. Several classification algorithms, including logistic regression, decision tree, random forest, and gradient boosting, were trained on the training data and compared on the validation data. The accuracy and classification report were evaluated for each model, and the best-performing one was selected for further analysis. The selected model's predictions were then compared against the actual churn values, and a confusion matrix was plotted to visualize them. This process allowed me to compare the performance of the different algorithms and determine which one best fit the data.
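A condensed sketch of this comparison, assuming a fully numeric feature matrix and a 0/1 `Churn` target; the 60/20/20 split ratios and the choice of random forest as the final model are illustrative, not the project's actual outcome:

```python
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import accuracy_score, classification_report, ConfusionMatrixDisplay

X = df.drop(columns=["Churn"])  # assumed target column name
y = df["Churn"]

# 60/20/20 train/validation/test split via two successive splits
X_train, X_temp, y_train, y_temp = train_test_split(
    X, y, test_size=0.4, random_state=42, stratify=y)
X_val, X_test, y_val, y_test = train_test_split(
    X_temp, y_temp, test_size=0.5, random_state=42, stratify=y_temp)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Random Forest": RandomForestClassifier(random_state=42),
    "Gradient Boosting": GradientBoostingClassifier(random_state=42),
}

# Train each model and compare performance on the validation set
for name, model in models.items():
    model.fit(X_train, y_train)
    preds = model.predict(X_val)
    print(name, "accuracy:", accuracy_score(y_val, preds))
    print(classification_report(y_val, preds))

# Confusion matrix for the chosen model on the held-out test set
best = models["Random Forest"]
ConfusionMatrixDisplay.from_estimator(best, X_test, y_test)
plt.title("Confusion matrix of the selected model")
plt.show()
```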