Commit
feat: add support for markdown prompts (promptfoo#1616)
mldangelo authored Sep 7, 2024
1 parent 899297b commit 96306c3
Showing 8 changed files with 103 additions and 10 deletions.
1 change: 1 addition & 0 deletions .vscode/settings.json
@@ -9,6 +9,7 @@
"Envar",
"envars",
"Evals",
"globbed",
"Groq",
"mitigations",
"openai",
5 changes: 5 additions & 0 deletions examples/custom-prompt-function/prompt.md
@@ -0,0 +1,5 @@
You're an angry pirate.

Be concise and stay in character.

Tell me about {{topic}}
3 changes: 3 additions & 0 deletions examples/custom-prompt-function/promptfooconfig.yaml
@@ -22,6 +22,9 @@ prompts:

- file://./subfolder/*.json

- id: file://prompt.md
label: markdown prompt

- id: file://prompt.jsonl
label: prompt_jsonl
- file://prompt.jsonl
21 changes: 16 additions & 5 deletions site/docs/configuration/parameters.md
@@ -72,6 +72,7 @@ prompts:
- file://path/to/prompt.json
- file://path/to/prompt.yaml
- file://path/to/prompt.yml
- file://path/to/prompt.md
# Globs are supported
- file://prompts/*.txt
- file://path/**/*
@@ -131,11 +132,21 @@ Translate the following text to German: "{{name}}: {{text}}"
The prompt separator can be overridden with the `PROMPTFOO_PROMPT_SEPARATOR` environment variable.
:::

### Prompts as Markdown

Markdown prompts are treated like raw text prompts. You can define a prompt in a Markdown file as follows:

```markdown title=prompt.md
You are a helpful assistant for Promptfoo. Please answer the following question: {{question}}
```

Note that only one prompt per Markdown file is supported.
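To use the Markdown file, reference it from your config like any other file-based prompt. A minimal sketch, mirroring the example config added in this commit:

```yaml title=promptfooconfig.yaml
prompts:
  - id: file://prompt.md
    label: markdown prompt
```

The `label` is optional; without it, the file path and the first 50 characters of the file's content are used as the label.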

### Different prompts per model

To set separate prompts for different providers, you can specify the prompt files within the `providers` section of your `promptfooconfig.yaml`. Each provider can have its own set of prompts that are tailored to its specific requirements or input format.

Here's an example of how to set separate prompts for Llama v2 and GPT models:
Here's an example of how to set separate prompts for Llama 3.1 and GPT-4o models:

```yaml title=promptfooconfig.yaml
prompts:
@@ -151,15 +162,15 @@ providers:
- id: openai:gpt-4o
prompts:
- gpt_chat_prompt
- id: replicate:meta/llama70b-v2-chat:02e509c789964a7ea8736978a43525956ef40397be9033abf9fd2badfe68c9e3
label: llama70b-v2-chat
- id: replicate:meta/meta-llama-3.1-405b-instruct
label: llama-3.1-405b-instruct
prompts:
- llama_completion_prompt
```

In this configuration, the `gpt_chat_prompt` is used for both GPT-3.5 and GPT-4 models, while the `llama_completion_prompt` is used for the Llama v2 model. The prompts are defined in separate files within the `prompts` directory.
In this configuration, the `gpt_chat_prompt` is used for both GPT-4o and GPT-4o-mini models, while the `llama_completion_prompt` is used for the Llama 3.1 model. The prompts are defined in separate files within the `prompts` directory.

Make sure to create the corresponding prompt files with the content formatted as expected by each model. For example, GPT models might expect a JSON array of messages, while Llama might expect a plain text format with a specific prefix.
Make sure to create the corresponding prompt files with the content formatted as expected by each model. For example, GPT models might expect a JSON array of messages, while Llama might expect a plain text format with a specific prefix.
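For illustration, a chat-style prompt file for the GPT models might look like this (the file name is an assumption, chosen only to match the `gpt_chat_prompt` label above):

```json title=prompts/gpt_chat_prompt.json
[
  { "role": "system", "content": "You are a helpful assistant." },
  { "role": "user", "content": "Tell me about {{topic}}" }
]
```

The corresponding `llama_completion_prompt` file would be plain text, e.g. something like `[INST] Tell me about {{topic}} [/INST]`, depending on the exact prompt format the model expects.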

### Prompt functions

6 changes: 1 addition & 5 deletions src/index.ts
@@ -54,11 +54,7 @@ async function evaluate(testSuite: EvaluateTestSuite, options: EvaluateOptions =
function: promptInput as PromptFunction,
};
} else if (typeof promptInput === 'string') {
const prompts = await readPrompts(promptInput);
return prompts.map((p) => ({
raw: p.raw,
label: p.label,
}));
return readPrompts(promptInput);
} else {
return {
raw: JSON.stringify(promptInput),
4 changes: 4 additions & 0 deletions src/prompts/index.ts
@@ -12,6 +12,7 @@ import { isJavascriptFile, parsePathOrGlob } from '../util';
import { processJsFile } from './processors/javascript';
import { processJsonFile } from './processors/json';
import { processJsonlFile } from './processors/jsonl';
import { processMarkdownFile } from './processors/markdown';
import { processPythonFile } from './processors/python';
import { processString } from './processors/string';
import { processTxtFile } from './processors/text';
@@ -143,6 +144,9 @@ export async function processPrompt(
if (extension && isJavascriptFile(extension)) {
return processJsFile(filePath, prompt, functionName);
}
if (extension === '.md') {
return processMarkdownFile(filePath, prompt);
}
if (extension === '.py') {
return processPythonFile(filePath, prompt, functionName);
}
12 changes: 12 additions & 0 deletions src/prompts/processors/markdown.ts
@@ -0,0 +1,12 @@
import fs from 'fs';
import type { Prompt } from '../../types';

// Reads a Markdown file and returns it as a single raw prompt.
export function processMarkdownFile(filePath: string, prompt: Partial<Prompt>): Prompt[] {
const content = fs.readFileSync(filePath, 'utf8');
return [
{
raw: content,
label: prompt.label || `${filePath}: ${content.slice(0, 50)}...`,
},
];
}
61 changes: 61 additions & 0 deletions test/prompts.processors.markdown.test.ts
@@ -0,0 +1,61 @@
import * as fs from 'fs';
import { processMarkdownFile } from '../src/prompts/processors/markdown';

jest.mock('fs');

describe('processMarkdownFile', () => {
const mockReadFileSync = jest.mocked(fs.readFileSync);

beforeEach(() => {
jest.clearAllMocks();
});

it('should process a valid Markdown file without a label', () => {
const filePath = 'file.md';
const fileContent = '# Heading\n\nThis is some markdown content.';
mockReadFileSync.mockReturnValue(fileContent);
expect(processMarkdownFile(filePath, {})).toEqual([
{
raw: fileContent,
label: `${filePath}: # Heading\n\nThis is some markdown content....`,
},
]);
expect(mockReadFileSync).toHaveBeenCalledWith(filePath, 'utf8');
});

it('should process a valid Markdown file with a label', () => {
const filePath = 'file.md';
const fileContent = '# Heading\n\nThis is some markdown content.';
mockReadFileSync.mockReturnValue(fileContent);
expect(processMarkdownFile(filePath, { label: 'Custom Label' })).toEqual([
{
raw: fileContent,
label: 'Custom Label',
},
]);
expect(mockReadFileSync).toHaveBeenCalledWith(filePath, 'utf8');
});

it('should truncate the label for long Markdown files', () => {
const filePath = 'file.md';
const fileContent = '# ' + 'A'.repeat(100);
mockReadFileSync.mockReturnValue(fileContent);
expect(processMarkdownFile(filePath, {})).toEqual([
{
raw: fileContent,
label: `${filePath}: # ${'A'.repeat(48)}...`,
},
]);
expect(mockReadFileSync).toHaveBeenCalledWith(filePath, 'utf8');
});

it('should throw an error if the file cannot be read', () => {
const filePath = 'nonexistent.md';
mockReadFileSync.mockImplementation(() => {
throw new Error('File not found');
});

expect(() => processMarkdownFile(filePath, {})).toThrow('File not found');
expect(mockReadFileSync).toHaveBeenCalledWith(filePath, 'utf8');
});
});
