This repository provides a reusable framework for pricing optimization problems where the goal is to balance two competing objectives: winning deals (conversion probability) through attractive pricing, and maximizing profitability. It demonstrates a complete workflow, including:
- Feature engineering at both item and quote levels (numeric, categorical, and temporal features)
- Predictive modeling to estimate quote conversion likelihood
- Optimization strategies (brute force, Bayesian optimization, and multi-objective genetic algorithms) to recommend markups that maximize expected profit while maintaining strong win rates (a brute-force sketch follows this list)
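To make the brute-force strategy concrete, here is a minimal sketch: it grid-searches candidate markups and keeps the one with the highest expected profit. The logistic `win_probability` below is a stand-in assumption for illustration, not the repository's calibrated model.

```python
import numpy as np

def win_probability(markup: float) -> float:
    # Stand-in conversion curve (an assumption): win probability decays as
    # markup rises. In this repo it would come from the calibrated model.
    return 1.0 / (1.0 + np.exp(10.0 * (markup - 0.25)))

def expected_profit(markup: float, cost: float, quantity: int) -> float:
    # Expected profit = P(win) * margin, with margin = markup * cost * quantity.
    return win_probability(markup) * markup * cost * quantity

# Brute force: score a grid of candidate markups and keep the best one.
markups = np.linspace(0.0, 0.6, 61)
profits = [expected_profit(m, cost=100.0, quantity=10) for m in markups]
best = markups[int(np.argmax(profits))]
print(f"recommended markup: {best:.2f}, expected profit: {max(profits):.2f}")
```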
Sales teams generate quotes that often contain multiple products (quote items). Each item has a base cost, and sales representatives have discretion to apply a markup that determines the final selling price. This flexibility is a double-edged sword:
- High markups increase the profit margin per item but decrease the likelihood of the customer accepting the quote (lower conversion rate).
- Low markups increase the chance of winning the deal but erode overall profitability.
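The optimization target is therefore expected profit, roughly P(win | markup) × markup × cost × quantity per item. For example, a 30% markup that wins an estimated 40% of quotes yields an expected margin of 0.30 × 0.40 = 0.12 of cost, while a 10% markup that wins 80% of quotes yields only 0.10 × 0.80 = 0.08; the best markup depends on how sharply conversion probability falls as price rises.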
| Column Name | Data Type | Description |
|---|---|---|
| quote_id | String | Unique identifier for a sales quote |
| item_id | String | Unique identifier for an item within a quote |
| supplier_unit_price | Float | The base supplier unit price (cost) |
| quantity | Integer | Quantity of this item |
| product_category | String | Category of the product (e.g., Compute, Storage, Networking) |
| quote_type | String | Type of quote (e.g., normal, QTO) |
| region | String | Geographic region of the customer |
| markup | Float | Markup percentage applied (e.g., 0.2 for 20%) |
| quote_converted | Integer | Target variable: 1 if quote accepted, 0 if lost |
| quote_publish_date | Timestamp | When the quote was published |
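To sanity-check the schema, the snippet below is a sketch that assumes `data1_imputed.pkl` is a pickled pandas DataFrame with these columns:

```python
import pandas as pd

# Load the imputed dataset (assumed to be a pickled pandas DataFrame).
df = pd.read_pickle("data1_imputed.pkl")

expected = {
    "quote_id", "item_id", "supplier_unit_price", "quantity",
    "product_category", "quote_type", "region", "markup",
    "quote_converted", "quote_publish_date",
}
print(df.dtypes)
print("missing columns:", expected - set(df.columns) or "none")
```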
- `quote_features.py`: Python module for feature engineering and processing quote data.
- `quote_feature_builder.pkl`: serialized Python object (pickle) for the quote feature builder.
- `calibrated_model.pkl`: pickled calibrated machine learning model.
- `data1_imputed.pkl`: input dataset with imputed values.
- `ConversionModel.ipynb`: Jupyter notebook for developing and analyzing the conversion model.
- `RecommendationModel.ipynb`: Jupyter notebook for building and evaluating the markup recommendation model.
- `environment.yml`: Conda environment file listing the dependencies required to run the notebooks and scripts.
- Clone the repository:

  ```bash
  git clone <repo-url>
  cd <repo-directory>
  ```

- Set up the environment:

  ```bash
  conda env create -f environment.yml
  conda activate <environment-name>
  ```

- Run the notebooks: open `ConversionModel.ipynb` or `RecommendationModel.ipynb` in JupyterLab or Jupyter Notebook.
- Use `quote_features.py` to process and engineer features from quote data.
- Load the provided `.pkl` files for pre-trained models and datasets (see the scoring sketch below).
- Refer to the notebooks for model training, evaluation, and analysis workflows.
- Python (version specified in `environment.yml`)
- Conda (for environment management)
- Jupyter Notebook/Lab
Released under the MIT License.
For questions or contributions, please open an issue or pull request.