# PromptKit

A production-grade library for structured LLM prompt engineering.
Explore the docs » · Report Bug · Request Feature
## Why PromptKit?

Managing prompts for Large Language Models (LLMs) can quickly become messy. Hardcoding prompts as f-strings mixes logic with presentation, offers no validation, and makes prompts difficult to reuse and test.

PromptKit solves this by treating your prompts as structured, version-controlled assets. By defining prompts in simple YAML files, you get:
- Clean Separation: Your prompt templates, logic, and configuration are separate from your application code.
- Safety & Reliability: Built-in validation ensures your prompts receive the correct inputs every time.
- Reusability: Define a prompt once and use it anywhere—in your Python code or directly from the CLI.
- Clarity: A clear, human-readable format for prompts that anyone on your team can understand.
## Features

- 📝 Declarative & Structured: Define prompts in simple YAML files with powerful Jinja2 templating.
- 🔍 Built-in Validation: Use Pydantic schemas to validate prompt inputs before they are ever sent to the LLM.
- 🏗️ Engine Agnostic: A clean engine abstraction layer supports OpenAI, with Ollama and other local models on the way.
- 💰 Cost & Token Estimation: Estimate token counts and potential costs before executing a prompt.
- 🖥️ Powerful CLI: Render, run, and lint prompts directly from your terminal for rapid development and testing.
- 🧪 Fully Tested & Typed: A comprehensive test suite and full type-hinting ensure reliability.
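To make the cost-and-token-estimation feature concrete, here is a stdlib-only sketch of the idea: estimate before you execute. This is not PromptKit's estimator; it uses the common back-of-the-envelope approximation of roughly 4 characters per token for English text, and the per-1K-token price is a placeholder, not a real rate.

```python
def estimate_tokens(text: str) -> int:
    """Rough token count: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def estimate_cost_usd(text: str, usd_per_1k_tokens: float) -> float:
    """Estimated cost of sending `text` at a given per-1K-token rate."""
    return estimate_tokens(text) / 1000 * usd_per_1k_tokens

prompt_text = "Hello Alice,\n\nWelcome to the team!"
print(estimate_tokens(prompt_text))
print(estimate_cost_usd(prompt_text, usd_per_1k_tokens=0.5))
```

Knowing the approximate size and cost of a prompt up front is what lets you catch runaway templates before they ever reach a paid API.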
## Installation

```bash
pip install promptkit-core
```
## Quick Start

Create a file named `prompts/generate_greeting.yaml`:
```yaml
# prompts/generate_greeting.yaml
name: generate_greeting
description: "Generates a personalized and professional greeting."
template: |
  Hello {{ name }},
  Welcome to the team! We are excited to have a {{ role }} with your skills on board.
  Best,
  The PromptKit Team
input_schema:
  name: str
  role: str
```
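The `input_schema` block above maps each input name to a type, and PromptKit validates supplied inputs against it before rendering (using Pydantic, per the feature list). The stdlib sketch below illustrates the concept only; it is not PromptKit's actual implementation.

```python
# Map schema type names to Python types (illustrative subset).
TYPE_MAP = {"str": str, "int": int, "float": float, "bool": bool}

def validate_inputs(schema: dict, inputs: dict) -> dict:
    """Return `inputs` unchanged if they satisfy `schema`, else raise ValueError."""
    for field, type_name in schema.items():
        if field not in inputs:
            raise ValueError(f"missing required input: {field!r}")
        if not isinstance(inputs[field], TYPE_MAP[type_name]):
            actual = type(inputs[field]).__name__
            raise ValueError(f"{field!r} must be {type_name}, got {actual}")
    extra = set(inputs) - set(schema)
    if extra:
        raise ValueError(f"unexpected inputs: {sorted(extra)}")
    return inputs

schema = {"name": "str", "role": "str"}
validate_inputs(schema, {"name": "Alice", "role": "Software Engineer"})  # passes
```

Failing fast at this boundary is the point: a typo in an input name surfaces as a clear error in your code, not as a malformed prompt silently sent to the LLM.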
Then load and run the prompt from Python:

```python
# main.py
from promptkit.core.loader import load_prompt
from promptkit.core.runner import run_prompt
from promptkit.engines.openai import OpenAIEngine

# Load the prompt from YAML (assuming it lives in a 'prompts' directory)
prompt = load_prompt("generate_greeting", prompt_dir="prompts")

# Configure the engine
engine = OpenAIEngine(api_key="sk-...")  # Or load from the environment

# Run the prompt with validated inputs
response = run_prompt(prompt, {"name": "Alice", "role": "Software Engineer"}, engine)
print(response)
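To inspect the final prompt text without making an API call, you can render the template yourself. PromptKit uses Jinja2 for real rendering; the stdlib sketch below only substitutes plain `{{ variable }}` placeholders, which is all this particular template needs.

```python
import re

# The template from prompts/generate_greeting.yaml, inlined for illustration.
TEMPLATE = """Hello {{ name }},
Welcome to the team! We are excited to have a {{ role }} with your skills on board.
Best,
The PromptKit Team"""

def render(template: str, **inputs: str) -> str:
    """Substitute {{ name }} placeholders; a toy stand-in for Jinja2 rendering."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lambda m: str(inputs[m.group(1)]), template)

print(render(TEMPLATE, name="Alice", role="Software Engineer"))
```

Previewing the rendered prompt this way is handy in tests, where you want to assert on the exact text an engine would receive without spending tokens.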
## CLI Usage

The CLI is perfect for quick tests, rendering, and validation.

```bash
# Set the directory where your prompts are stored (optional; can also be passed as an argument)
export PROMPTKIT_PROMPT_DIR=./prompts

# Run the prompt directly from the terminal
promptkit run generate_greeting --key "sk-..." --name "Bob" --role "Data Scientist"

# Just render the template to see the output
promptkit render generate_greeting --name "Charlie" --role "Product Manager"

# Lint your YAML file to check for errors
promptkit lint generate_greeting
```
For detailed usage, advanced features, and API reference, please refer to the Official PromptKit Documentation.
## Contributing

Contributions are welcome! Whether it's a bug report, a new feature, or a documentation improvement, please feel free to open an issue or submit a pull request.

1. Fork the repository.
2. Create your feature branch (`git checkout -b feature/AmazingFeature`).
3. Install development dependencies: `pip install -e '.[dev]'`
4. Commit your changes (`git commit -m 'Add some AmazingFeature'`).
5. Push to the branch (`git push origin feature/AmazingFeature`).
6. Open a Pull Request.
Please see the `CONTRIBUTING.md` file for more details.
## License

This project is licensed under the MIT License. See the `LICENSE` file for details.