A simple, flexible Python starter kit for working with LLMs (Large Language Models): a portable, reproducible Conda environment that supports both local and cloud providers, such as Ollama and OpenAI.
- ✅ Environment setup using Conda + pip
- ✅ LLMClient abstraction for Ollama and OpenAI
- ✅ .env-based configuration for flexible model switching
- ✅ Includes testing script and interactive Jupyter notebook
- ✅ Clean folder structure with starter files for collaboration
- ✅ Supports streaming JSON responses from Ollama
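To illustrate the abstraction, the backend-selection part of such a client might look like the sketch below. This is hypothetical and not the repo's actual implementation; the real `llm_client.py` also handles API calls and streaming.

```python
import os


class LLMClient:
    """Pick an LLM backend (Ollama or OpenAI) from environment flags.

    Minimal sketch mirroring the .env keys described in this README;
    the actual llm_client.py in the repo may differ.
    """

    def __init__(self):
        # USE_OLLAMA=true selects the local backend; anything else falls
        # through to OpenAI.
        self.use_ollama = os.getenv("USE_OLLAMA", "true").lower() == "true"
        if self.use_ollama:
            self.model = os.getenv("OLLAMA_MODEL", "orca-mini")
        else:
            self.model = os.getenv("OPENAI_MODEL", "gpt-4")

    def backend(self) -> str:
        """Return the name of the selected backend."""
        return "ollama" if self.use_ollama else "openai"
```

Keeping the selection logic in one place makes it straightforward to add further providers later.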
Binder currently does not support Ollama, which is required to run this project locally with models like orca-mini.
As a result, while the notebook may launch, it will not produce valid LLM responses unless reconfigured to use OpenAI with a valid API key.
✅ To run this project fully, clone the repo and follow the local setup instructions below.
To run locally:
Clone the repository or download the ZIP and extract it:

```bash
git clone <your-repo-url>
cd py-env
```

To set a custom name for your environment, edit the `name:` field in `environment.yml` before running the command. Example:

```yaml
name: my-custom-env
```

Then create the environment:

```bash
conda env create -f environment.yml
conda activate py-env
```

💡 To update the environment later (e.g., after adding new packages):

```bash
pip install <new-package>
pip freeze > requirements.txt
conda env update -f environment.yml --prune
```
Copy the sample environment configuration:

```bash
cp .env.sample .env
```

Edit `.env` to choose which LLM backend you want to use.

For Ollama (local):

```env
USE_OLLAMA=true
OLLAMA_MODEL=orca-mini
USE_OPENAI=false
```

Ensure Ollama is installed and running: https://ollama.com

For OpenAI (cloud):

```env
USE_OLLAMA=false
USE_OPENAI=true
OPENAI_API_KEY=sk-<your-key>
OPENAI_MODEL=gpt-4
```

🔐 Never commit `.env` with real API keys to Git.
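A minimal, dependency-free way to read such a `.env` file is sketched below; this is illustrative only, and the repo itself may rely on a library such as `python-dotenv` instead.

```python
def parse_env(path=".env"):
    """Parse simple KEY=value lines, ignoring blanks and # comments.

    Illustrative sketch only; it does not handle quoting or 'export'
    prefixes the way python-dotenv does.
    """
    config = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                config[key.strip()] = value.strip()
    return config
```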
This script calls the LLM using your `.env` settings:

```bash
python test_client.py
```

Expected output:

```
LLM Response:
<generated response here>
```
A notebook (`llm_demo.ipynb`) is included for interactive exploration:

```bash
jupyter notebook llm_demo.ipynb
```

- Cell 1: Instantiates the LLM client from `.env`
- Cell 2: Sends a test prompt and displays the response
A Binder setup is included for zero-install cloud execution (see the limitations noted above). Project structure:
```
py-env/
├── .binder/            # Binder environment and build scripts
│   ├── environment.yml
│   └── postBuild
├── .env.sample         # Template for environment variables
├── .env.minimal        # Minimal config for Ollama-only use
├── .gitignore          # Files and folders to exclude from Git
├── environment.yml     # Conda + pip environment definition
├── llm_client.py       # Reusable class for OpenAI + Ollama
├── llm_demo.ipynb      # Jupyter notebook demo
├── requirements.txt    # pip dependencies
├── setup_env.sh        # Shell script to automate setup
└── test_client.py      # Basic test script to verify LLM use
```
- `llm_client.py` auto-loads `.env` and handles JSON streaming from Ollama
- Lazy import of `openai` prevents crashes if it's not installed
- Use `.env.minimal` if you're only using Ollama locally
- Jupyter-friendly design makes it easy to test and iterate
- All defaults are safe and extensible
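Ollama streams its responses as newline-delimited JSON chunks, each carrying a `response` fragment and a `done` flag. Joining them might look like this sketch (the function name is illustrative, not taken from the repo):

```python
import json


def collect_stream(lines):
    """Join the 'response' fragments from Ollama-style NDJSON chunks."""
    parts = []
    for raw in lines:
        if not raw.strip():
            continue  # skip blank keep-alive lines
        chunk = json.loads(raw)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            break  # final chunk signals end of the stream
    return "".join(parts)
```

In practice the lines would come from iterating over the HTTP response body rather than a list.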
- ✅ Add new prompts in `test_client.py` or the notebook
- ✅ Build a CLI around `LLMClient`
- ✅ Deploy as an API using FastAPI or Flask
- ✅ Add more `.env` configs for Azure/OpenRouter/etc.
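For the CLI idea above, a hypothetical starting point using only the standard library might look like this (names are illustrative, not part of the repo):

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """Define a tiny CLI around the project's LLM client."""
    parser = argparse.ArgumentParser(
        prog="llm",
        description="Send a prompt to the backend configured in .env",
    )
    parser.add_argument("prompt", help="prompt text to send to the model")
    parser.add_argument("--model", default=None,
                        help="override the model chosen in .env")
    return parser


if __name__ == "__main__":
    args = build_parser().parse_args()
    # A real CLI would instantiate LLMClient here and print its response.
    print(f"Would send {args.prompt!r} to model {args.model or 'from .env'}")
```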
MIT or your preferred license. Attribution appreciated if forked or reused.