Collection of machine learning examples demonstrating how to build, train, and deploy ML models using the JFrogML Platform.
To get started with these examples:
- Clone this repository
- Navigate to the example project you're interested in
- Follow the README and installation instructions within each project folder
**Prerequisites**
- Python: 3.9-3.11
- JFrog Account: sign up for free
- JFrogML Setup: follow the Installation & Configuration Guide
Click any example below to open a step-by-step guide for building, training, and deploying it.
| Example | Domain | Technology | Description |
|---|---|---|---|
| 💳 Fraud Detection | Financial | CatBoost + XGBoost + RF | Credit card fraud detection with ensemble methods |
| 🛠️ DevOps Helper | DevOps | Fine-tuned Llama/Qwen LLM | DevOps assistant using fine-tuned Llama2 8B and Qwen 1.5B with LoRA |
| 📚 Book Recommender | E-commerce | Content-Based Filtering | ISBN-based book recommendation system using TF-IDF and cosine similarity |
| 🏪 Feature Store Quickstart | Feature Engineering | Spark SQL + Feature Store | Complete guide to JFrogML Feature Store |
| 💰 Financial QA | FinTech | Fine-tuned T5 | Question answering for financial domain using T5 with LoRA |
| 📞 Customer Churn | Telecom | XGBoost | Subscriber churn prediction with gradient boosting |
Pick the workflow that fits your team. Both are production-ready; they differ in how you control builds and versioning.
**🔬 Artifact-first (Registry)**
- Train in a notebook or script and log a framework-native model binary to the JFrogML Registry
- The logged model version includes a dependency manifest, serving code, and metadata
- JFrogML packages it into a container image; deploy the image as a realtime/batch/streaming API
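
A rough sketch of this flow is shown below. The `frogml.sklearn.log_model` call, its parameters, and the repository/model names are illustrative assumptions; check the JFrogML SDK docs for the exact API.

```python
# Illustrative sketch of the artifact-first flow. The frogml logging API
# (module path, function name, parameters) is assumed here -- consult the
# JFrogML SDK docs for the authoritative signature.
import frogml
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# 1. Train in a notebook or script with your framework of choice.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)

# 2. Log the framework-native binary to the JFrogML Registry as a new model
#    version, together with its dependency manifest, serving code, and
#    metadata. Parameter and repository names below are illustrative.
frogml.sklearn.log_model(
    model=model,
    model_name="fraud-detection",
    repository="ml-models",              # assumed target repository
    dependencies=["requirements.txt"],   # dependency manifest
    code_dir="serving/",                 # serving code packaged with the version
)
```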
**🚀 Code-first (FrogMLModel)**
- Implement the lifecycle in code (train/initialize/serve) with a FrogMLModel in your repo
- Trigger a Build; JFrogML builds your code and either runs training (if defined) or preloads a binary
- Deploy the Build as a realtime/batch/streaming API
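
Below is a minimal sketch of what such a class could look like. The FrogMLModel import path, the `build`/`initialize_model`/`predict` hook names, and the data path are assumptions that map onto the train/initialize/serve lifecycle described above.

```python
# Illustrative sketch of the code-first flow. The import path and method
# names are assumed -- check the JFrogML SDK docs for the exact base class
# and lifecycle hooks.
import pandas as pd
import xgboost as xgb
from frogml import FrogMLModel   # assumed import path


class ChurnModel(FrogMLModel):
    def build(self):
        """Runs during the Build: train the model (or preload a binary)."""
        data = pd.read_csv("data/churn.csv")   # hypothetical training data path
        X, y = data.drop(columns=["churn"]), data["churn"]
        self.model = xgb.XGBClassifier(n_estimators=200).fit(X, y)

    def initialize_model(self):
        """Runs once when the deployed container starts, e.g. to load weights."""
        pass  # the model trained in build() is carried over with the Build

    def predict(self, df: pd.DataFrame) -> pd.DataFrame:
        """Serves realtime/batch/streaming requests."""
        return pd.DataFrame({"churn_probability": self.model.predict_proba(df)[:, 1]})
```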
| Aspect | 🔬 Artifact-first (Registry) | 🚀 Code-first (FrogMLModel) |
|---|---|---|
| Authoring | Train in a notebook/script; produce a model binary | Develop in your repo; wrap logic in a FrogMLModel |
| What is logged/pushed | Binary model artifact to the JFrogML Registry (framework-native: scikit-learn, PyTorch, ONNX, etc.) plus dependency manifest, serving code, and metadata | Source code pushed/triggered for the Build (FrogMLModel + repo code); no binary logged at this step |
| Versioning | Versioned, framework-native model artifacts in the JFrogML Registry | Versioned Builds in JFrogML |
| Build semantics | Packages the logged binary into a container image | Executes your custom workflow; may run training or preload a binary |
| Deployment | Deploy as an API (realtime/batch/streaming) from the built image | Same: deploy as an API (realtime/batch/streaming) from the built image |
| Who drives workflow | Artifact + metadata; platform packages and serves | Your code defines build/train/serve lifecycle |
| Production posture | Production-capable; simpler path with less custom control | Production-capable; greater control and standardization |