## Introduction

Flow Prompt is a dynamic, all-in-one library for managing and optimizing prompts for large language models (LLMs), and for generating tests from an ideal answer, in production and R&D settings. It facilitates budget-aware operations, dynamic data integration, latency and cost metrics visibility, and efficient load distribution across multiple AI models.

## Features

- **CI/CD testing**: Generates tests based on the context and an ideal answer (usually written by a human).
- **Dynamic Prompt Development**: Avoid budget exceptions with dynamic data.
- **Multi-Model Support**: Seamlessly integrate with various LLMs such as OpenAI, Anthropic, and more.
- **Real-Time Insights**: Monitor interactions and request/response metrics in production.
flow = FlowPrompt(openai_key='your_api_key', openai_org='your_org')

# Create a prompt
prompt = PipePrompt('greet_user')
prompt.add("You're {name}. Say Hello and ask what's their name.", role="system")

# Call AI model with FlowPrompt
context = {"name": "John Doe"}
# test_data is an optional parameter used for generating tests
response = flow.call(prompt.id, context, flow_behaviour, test_data={
    'ideal_answer': "Hello, I'm John Doe. What's your name?",
    'behavior_name': "gemini",
})
print(response.content)
```
- To review your created tests and their scores, go to https://cloud.flow-prompt.com/tests. There you can update the prompt and rerun tests for a published or saved version. If you update and publish a version online, the library will automatically use the new version of the prompt. This lets you update a prompt without redeploying your code, which is a costly operation when only the prompt needs to change.

- To review logs, go to https://cloud.flow-prompt.com/logs, where you can see metrics such as latency, cost, and tokens.
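The `{name}` placeholder in the prompt above looks like standard Python `str.format`-style templating. As a minimal sketch of that substitution (an assumption about how the context dict fills the template, not Flow Prompt's documented internals):

```python
# Hypothetical illustration: filling a prompt template from a context dict,
# assuming str.format-style placeholders as in the example above.
template = "You're {name}. Say Hello and ask what's their name."
context = {"name": "John Doe"}

rendered = template.format(**context)
print(rendered)  # → You're John Doe. Say Hello and ask what's their name.
```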

## Best Security Practices
For production environments, it is recommended to store secrets securely and not directly in your codebase. Consider using a secret management service or encrypted environment variables.
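A minimal sketch of the environment-variable approach (the `load_secret` helper and the demo key value are illustrative, not part of Flow Prompt's API):

```python
import os

def load_secret(name: str):
    """Fetch a secret from the environment; returns None when unset."""
    return os.environ.get(name)

# In production the variable is set by your deployment platform or a secret
# manager; it is populated here only so the example runs end to end.
os.environ.setdefault("OPENAI_API_KEY", "sk-demo-not-a-real-key")

openai_key = load_secret("OPENAI_API_KEY")
if openai_key is None:
    raise RuntimeError("OPENAI_API_KEY is not set")

# The key can then be passed in exactly as in the example above:
# flow = FlowPrompt(openai_key=openai_key, openai_org=load_secret("OPENAI_ORG"))
```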