Lightweight toolkit for quick-look petrophysical estimation and exploration.
This repository contains the Python backend and SvelteKit frontend used by the quick_pp application, plus utilities for running ML training and simple petrophysical workflows.
## Goals of this README
- Give developers and users the minimal, practical steps to get the app running locally (backend, frontend and optional Docker services).
## Project components

- Backend: FastAPI application, data services, model endpoints and plotting APIs (in `quick_pp/app/backend`).
- Frontend: SvelteKit UI (in `quick_pp/app/frontend`) providing data visualisations and tools.
- Docker: Compose assets to run backend + Postgres for development (`quick_pp/app/docker`).
- CLI: `quick_pp` CLI wrapper that starts services, runs training, prediction and deployment tasks (`quick_pp/cli.py`).
- Machine learning: training/prediction pipelines and MLflow integration (`quick_pp/machine_learning`).
## Prerequisites
- Python 3.11+ (for backend and CLI)
- Node.js 18+ and npm or yarn (for frontend)
- Docker & Docker Compose (optional, for the packaged backend + DB)
- uv (optional, fast Python package installer from https://github.com/astral-sh/uv)
## .env & Database (SQLite vs PostgreSQL)

- The application reads DB and other secrets from environment variables. For local development, create a `.env` file in the repo root or use `quick_pp/app/docker/.env` when running the bundled Docker Compose stack.
- Minimal `.env` examples:

SQLite (quick local testing):

```text
QPP_DATABASE_URL=sqlite:///./data/local.db
QPP_SECRET_KEY=change-this-to-a-random-string
```

PostgreSQL (recommended for realistic usage / Docker):

```text
QPP_DATABASE_URL=postgresql://qpp_user:qpp_pass@postgres:5432/quick_pp
QPP_SECRET_KEY=replace-with-secure-value
# if you run DB externally, replace host with reachable hostname or IP
```

Which to choose (a quick connection check is sketched after this list):
- SQLite: easiest for quick, single-user experiments. No external DB server required but limited in concurrency and not recommended for multi-container deployments.
- PostgreSQL: recommended for Docker and production-like setups; the `quick_pp/app/docker/docker-compose.yaml` in the repo is configured to create a Postgres service and a matching `.env` template.
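Once the variables are set, you can sanity-check the database connection before starting the app. A minimal sketch, assuming SQLAlchemy is installed (common in FastAPI stacks); this is not the app's own settings code:

```python
# Read the same variable the stack expects; fall back to an in-memory SQLite DB.
import os

from sqlalchemy import create_engine, text

db_url = os.environ.get("QPP_DATABASE_URL", "sqlite:///:memory:")

engine = create_engine(db_url)
with engine.connect() as conn:
    # Prints 1 if the database is reachable and the credentials are valid.
    print(conn.execute(text("SELECT 1")).scalar())
```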
Security note:

- Never commit secrets (`QPP_SECRET_KEY`, DB passwords) to version control. Use environment-specific `.env` files excluded via `.gitignore`, or a secrets manager.
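One quick way to generate a suitable `QPP_SECRET_KEY` value with the Python standard library:

```python
# Print a URL-safe random string suitable for use as QPP_SECRET_KEY.
import secrets

print(secrets.token_urlsafe(32))
```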
## Quick checklist

- Ports: backend API `6312`, frontend dev `5173`, frontend prod `5469`, MLflow UI `5015`, model server `5555`.
- Backend CLI entrypoint: `python main.py` (or `quick_pp` if installed).
## Clone & Python setup

- Clone the repo and create a venv:

```bash
git clone https://github.com/imranfadhil/quick_pp.git
cd quick_pp
uv venv --python 3.11
# mac/linux
source .venv/bin/activate
# windows (cmd.exe)
.venv\Scripts\activate
```

- Install Python dependencies:

```bash
uv pip install -r requirements.txt
# (optional) install package editable for CLI convenience
uv pip install -e .
```
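After an editable install, a quick smoke test (hypothetical, not part of the repo's tooling) confirms the package resolves to your working copy:

```python
# Verify the editable install: the printed path should point into your clone.
import quick_pp

print(quick_pp.__file__)
```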
## Using Docker (recommended for a complete local stack)
- The repo provides Docker assets in `quick_pp/app/docker/` to start the backend and a Postgres data volume.

Quick docker compose (from repo root):

```bash
cd quick_pp/app/docker
docker-compose up -d
```

This will bring up services configured for development. Logs can be checked with `docker-compose logs -f` in the same folder.
## Frontend (SvelteKit)

The frontend is a SvelteKit application. For the best onboarding experience, use the CLI to start the frontend (see CLI commands below). The CLI handles dependency installation, building, and running the server automatically.
For manual setup or advanced development:
- Install dependencies:

```bash
cd quick_pp/app/frontend
npm install
# Ensure Plotly is available for the UI components
npm install plotly.js-dist-min --save
```

- Run the dev server:

```bash
npm run dev
```

- Open the frontend at http://localhost:5173 (the SvelteKit default).
For a production build:

- Run `npm run build` to build the app.
- Run `npm run preview` to preview the production build locally.
- The CLI can start the production server automatically (see CLI commands below).
## Start the app using the project CLI

- From the repo root you can use the included CLI, which orchestrates backend and frontend processes. Example (starts the backend and, if available, the frontend production server):

```bash
python main.py app
# or (if installed) the user-facing command
quick_pp app
```

Start backend only (dev):

```bash
python main.py backend --debug
# or (if installed) the user-facing command
quick_pp backend --debug
```

Start frontend only (dev):

```bash
python main.py frontend --dev
# or (if installed) the user-facing command
quick_pp frontend --dev
```

Start frontend only (prod):

```bash
python main.py frontend
# or (if installed) the user-facing command
quick_pp frontend
```

Common commands:

- Run MLflow tracking UI (local): `python main.py mlflow_server`
- Deploy model server: `python main.py model_deployment`
- Train/predict via CLI: see `python main.py --help` or `quick_pp --help`
- Manage Docker services: `python main.py docker up -d` (see `python main.py docker --help` for options)
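Once the backend is running, a quick way to confirm it is serving requests (a hypothetical check, assuming the default port `6312`; FastAPI apps expose their schema at `/openapi.json`):

```python
# Minimal liveness check against the running backend (assumes default port 6312).
import requests

resp = requests.get("http://localhost:6312/openapi.json", timeout=5)
resp.raise_for_status()
print(f"Backend is up; schema lists {len(resp.json().get('paths', {}))} paths.")
```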
## Testing

- Run unit tests with `pytest` from the repo root:

```bash
pytest -q
```
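To add a test of your own, here is a minimal sketch using FastAPI's test client; the import path below is an assumption (adjust it to wherever the backend's FastAPI `app` object actually lives):

```python
# Hypothetical smoke test; adjust the import to the backend's actual app module.
from fastapi.testclient import TestClient

from quick_pp.app.backend.main import app  # assumed location of the FastAPI app


def test_openapi_schema_is_served():
    client = TestClient(app)
    resp = client.get("/openapi.json")
    assert resp.status_code == 200
```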
## Troubleshooting & tips
- If the frontend does not render charts, ensure `plotly.js-dist-min` is installed in `quick_pp/app/frontend` (some components do dynamic imports).
- If the backend fails to start behind Docker, check `quick_pp/app/docker/.env` and the Postgres volumes under `quick_pp/app/docker/data/`.
- Use the CLI `python main.py` for convenience; it will open browser windows for the services it starts unless `--no-open` is provided.
## Further reading

- API docs are available when the backend is running: http://localhost:6312/docs
- Project documentation: https://quick-pp.readthedocs.io/en/latest/index.html
## License

- See the `LICENSE` file in the repository root.
Contributions and feedback welcome — open an issue or a PR with improvements.
## Example notebooks

The repository includes several example notebooks under `notebooks/` that demonstrate data handling, EDA and basic petrophysical workflows. Recommended workflow for exploring the project locally:

- Start the backend API (see CLI commands above) if a notebook calls the API.
- Open a Python environment with the project dependencies installed.
- Launch JupyterLab or Jupyter Notebook and open the notebooks in `notebooks/`.
Key notebooks:

- `01_data_handler.ipynb` — create and inspect a mock `qppp` project file.
- `02_EDA.ipynb` — quick exploratory data analysis patterns used in demos.
- `03_*` series — interpretation examples (porosity, saturation, rock typing).
## Machine learning (training & prediction)

The project includes ML training and prediction utilities integrated with MLflow. High-level steps and helpful details:

- Prepare input data
  - Training expects a Parquet file in `data/input/<data_hash>___.parquet`.
  - The feature set required by each modelling config is defined in `quick_pp/machine_learning/config.py` (or the `MODELLING_CONFIG` used by the training code). Ensure input columns match the configured features; a sketch of preparing such a file follows.
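For illustration, here is a minimal sketch of writing a compatible Parquet file with pandas (requires a Parquet engine such as pyarrow). The column names below are placeholders, not the project's actual feature list; match them to the features in your modelling config:

```python
# Hypothetical example: write training data to the location the trainer expects.
# Column names are placeholders; use the features from your modelling config.
from pathlib import Path

import pandas as pd

df = pd.DataFrame({
    "GR": [45.2, 80.1, 120.5],    # gamma ray (placeholder)
    "RHOB": [2.45, 2.55, 2.65],   # bulk density (placeholder)
    "NPHI": [0.25, 0.18, 0.12],   # neutron porosity (placeholder)
})

out = Path("data/input/mock___.parquet")  # <data_hash> = "mock"
out.parent.mkdir(parents=True, exist_ok=True)
df.to_parquet(out)
```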
- Train a model (local):

```bash
# from repo root, with virtualenv active
python main.py train <model_config> <data_hash>
# example
python main.py train mock mock
```

- Run predictions:

```bash
python main.py predict <model_config> <data_hash> [output_name] [--plot]
# example
python main.py predict mock mock results_test --plot
```

- Deploy model server (serves registered MLflow models; a scoring sketch follows the notes below):

```bash
python main.py model_deployment
```

Notes:
- MLflow UI (tracking server) is available with `python main.py mlflow_server`.
- The `--plot` flag in `predict` saves visual outputs (if supported by the predict pipeline).
- For production or reproducible experiments, register models in MLflow and configure the model registry settings used by the deployment code.
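Once the model server is up, deployed MLflow models are typically scored over HTTP. A minimal sketch, assuming the server follows MLflow's standard `/invocations` scoring protocol on the default port `5555`; the feature names are placeholders and must match your modelling config:

```python
# Hypothetical scoring request against the deployed model server.
# Assumes MLflow's standard /invocations protocol on the default port 5555.
import requests

payload = {
    "dataframe_split": {
        "columns": ["GR", "RHOB", "NPHI"],  # placeholder feature names
        "data": [[45.2, 2.45, 0.25]],
    }
}

resp = requests.post("http://localhost:5555/invocations", json=payload, timeout=10)
resp.raise_for_status()
print(resp.json())
```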