- 📝 Template rendering - Create prompts using the Nunjucks templating engine
- 💬 Chat messages - Easily generate chat-based prompts for modern LLMs
- 🔧 Filters and extensions - Apply transformations to your prompt content
- ⚡ Caching - Efficiently render prompts by avoiding redundant processing
- 🛠️ Tool calling - First-class support for function calling in LLMs
- 🖼️ Vision support - Add images to prompts for multimodal models
## Installation

```sh
npm install type-prompt
```

## Basic Usage

```typescript
import { Prompt } from "type-prompt";

const p = new Prompt("Write a 500-word blog post on {{ topic }}.");
console.log(p.text({ topic: "AI frameworks" }));
```

## Chat Messages

```typescript
import { Prompt } from "type-prompt";

const p = new Prompt(`
{% chat role="system" %}
You are a {{ persona }}.
{% endchat %}
{% chat role="user" %}
Hello, how are you?
{% endchat %}
`);

const messages = p.chatMessages({ persona: "helpful assistant" });
// Output:
// [
//   { role: 'system', content: 'You are a helpful assistant.' },
//   { role: 'user', content: 'Hello, how are you?' }
// ]
```
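The array returned by `chatMessages` uses the common `{ role, content }` message shape, so it can be handed directly to a chat-completion client. A minimal sketch of that shape as a TypeScript type, inferred from the output above (the library may export its own type):

```typescript
// Message shape as shown in the example output (assumed, not the library's exported type).
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

const messages: ChatMessage[] = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Hello, how are you?" },
];

// e.g. pass this array as the `messages` field of a chat-completion request.
console.log(messages.map((m) => m.role).join(",")); // system,user
```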
## Prompt Caching

```typescript
import { Prompt } from "type-prompt";

const p = new Prompt(`
{% chat role="user" %}
Analyze this book:
{{ book | cache_control("ephemeral") }}
What is the title of this book? Only output the title.
{% endchat %}
`);

const messages = p.chatMessages({ book: "This is a short book!" });
// The book content will be wrapped in a special content block with cache_control
```
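The example above says the filtered value is wrapped in a content block carrying `cache_control`. As a hedged sketch of what such a block can look like, modeled on Anthropic-style prompt-caching content blocks (the library's actual output shape may differ):

```typescript
// Assumed shape of a cache-marked content block (Anthropic-style; sketch only,
// not necessarily what type-prompt emits).
const contentBlock = {
  type: "text",
  text: "This is a short book!",
  cache_control: { type: "ephemeral" },
};

console.log(JSON.stringify(contentBlock.cache_control)); // {"type":"ephemeral"}
```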
## Tool Calling

```typescript
import { Prompt } from "type-prompt";

/**
 * Get information about the user's laptop.
 */
function getLaptopInfo() {
  return "MacBook Pro, macOS 12.3";
}

const p = new Prompt(`
{% chat role="user" %}
{{ query }}
{{ getLaptopInfo | tool }}
{% endchat %}
`);

const messages = p.chatMessages({
  query: "Can you guess the name of my laptop?",
  getLaptopInfo,
});
// The tool will be properly formatted for LLM function calling
```

## API Reference

### Prompt

The main class for creating and rendering prompts.
```typescript
new Prompt(template: string, options?: {
  name?: string;
  version?: string;
  metadata?: Record<string, any>;
  canaryWord?: string;
  renderCache?: RenderCache;
})
```

- `text(data?: Record<string, any>)`: Render the prompt as plain text
- `chatMessages(data?: Record<string, any>)`: Render the prompt as an array of chat messages
- `canaryLeaked(text: string)`: Check whether a canary word has leaked
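The `canaryWord` option and `canaryLeaked` work together to detect prompt leakage: a unique marker is injected into the rendered prompt, and if that marker shows up in model output, the prompt leaked. A minimal sketch of the idea as plain string containment (not the library's implementation; the marker format below is made up):

```typescript
// Hypothetical canary marker; the library generates its own format.
const canaryWord = "CANARY-5f0bfb";

// Leak detection reduces to checking whether the model echoed the marker verbatim.
function leaked(modelOutput: string, canary: string): boolean {
  return modelOutput.includes(canary);
}

console.log(leaked(`My instructions contain ${canaryWord}.`, canaryWord)); // true
console.log(leaked("Here is your blog post.", canaryWord)); // false
```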
### AsyncPrompt

An asynchronous version of the `Prompt` class with the same API, exposing Promise-based methods.

```typescript
new AsyncPrompt(template: string, options?: {
  name?: string;
  version?: string;
  metadata?: Record<string, any>;
  canaryWord?: string;
  renderCache?: RenderCache;
})
```

- `text(data?: Record<string, any>)`: Returns a Promise that resolves to the rendered text
- `chatMessages(data?: Record<string, any>)`: Returns a Promise that resolves to an array of chat messages
- `canaryLeaked(text: string)`: Check whether a canary word has leaked
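Since every `AsyncPrompt` rendering method returns a Promise, rendering happens inside an async context. A self-contained sketch of the calling pattern, using a local stub in place of the real class (the stub's naive `{{ var }}` substitution stands in for the Nunjucks engine):

```typescript
// Local stub mirroring AsyncPrompt's Promise-based API (sketch only, not the real class).
class AsyncPromptStub {
  constructor(private template: string) {}

  async text(data: Record<string, string>): Promise<string> {
    // The real library renders with Nunjucks; this is a naive substitution.
    return this.template.replace(/\{\{\s*(\w+)\s*\}\}/g, (_, key) => data[key] ?? "");
  }
}

async function main() {
  const p = new AsyncPromptStub("Write a 500-word blog post on {{ topic }}.");
  const rendered = await p.text({ topic: "AI frameworks" }); // awaited, unlike Prompt
  console.log(rendered); // Write a 500-word blog post on AI frameworks.
}

main();
```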
## Filters

- `cache_control(text: string, cacheType: string = "ephemeral")`: Mark text for caching
- `image(source: string)`: Include an image in the prompt
- `tool(fn: Function)`: Convert a function to a tool for function calling

## Extensions

- `chat`: Define a chat message block
- `completion`: Generate text using an LLM during template rendering
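Filters use standard Nunjucks pipe syntax, as in `{{ book | cache_control("ephemeral") }}` earlier. Conceptually, a filter is just a function applied to the piped value; a toy model of that mechanism (not the library's filter machinery, and the stand-in behavior below is invented):

```typescript
// Toy model of Nunjucks filter application: `value | filter(arg)` ~ filter(value, arg).
type Filter = (value: string, ...args: string[]) => string;

const filters: Record<string, Filter> = {
  // Stand-in behavior; the real cache_control marks text for provider-side caching.
  cache_control: (value, cacheType = "ephemeral") => `[cache:${cacheType}] ${value}`,
};

console.log(filters.cache_control("This is a short book!"));
// [cache:ephemeral] This is a short book!
```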
## Contributing

Contributions, issues, and feature requests are welcome! Feel free to check the issues page.

## License

This project is MIT licensed.
Made with ❤️ by Marko