feat: Add comprehensive provider examples for all major LiteLLM providers with clean API structure #983

Conversation
Walkthrough

Twenty-eight new Python example scripts were added, demonstrating usage of the PraisonAI Agent framework with various language model providers and specialized AI agents. Each script initializes an Agent with provider-specific or role-specific configuration and runs three example prompts covering conversational, creative, research, coding, or analytical tasks. Several scripts define specialized AI agents with distinct roles using specific language models. No exported or public entities were introduced except for globals.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Agent
    participant ProviderLLM
    User->>Agent: Initialize with provider-specific or role-specific config
    loop For each example prompt
        User->>Agent: start(prompt)
        Agent->>ProviderLLM: Send prompt
        ProviderLLM-->>Agent: Return response
        Agent-->>User: Return response
    end
```
Summary of Changes
Hello @Dhivya-Bharathy, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly expands the library's example coverage by adding comprehensive demonstrations for a wide array of large language model providers. The changes aim to provide clear, standardized usage patterns for integrating and interacting with various LLMs through the praisonaiagents framework, making it easier for developers to get started with different services.
Highlights
- **New Provider Examples:** This pull request introduces 12 new Python examples demonstrating the usage of `praisonaiagents.Agent` with various major LiteLLM providers, including AWS Bedrock, Azure OpenAI, Cloudflare, Fireworks, Hugging Face, OpenRouter, Perplexity, Replicate, AWS SageMaker, Together AI, Google Vertex AI, and vLLM.
- **Standardized API Usage:** All newly added examples consistently showcase a clean and standardized API structure by initializing `praisonaiagents.Agent` with specific `llm` parameters (e.g., `bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0`, `azure/gpt-4`) and demonstrating interaction via `agent.start()`.
- **Diverse LLM Capabilities:** Each example includes multiple prompts to illustrate different capabilities of the integrated LLMs, such as creative writing, code generation, analytical reasoning, research summarization, and problem-solving.
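The `llm` strings above follow LiteLLM's `provider/model` naming convention. A minimal, stdlib-only sketch of how such an identifier decomposes (the helper name here is hypothetical, not part of the library):

```python
# Illustrative only: LiteLLM-style model identifiers use a
# "provider/model" prefix convention, as in the examples above.
def split_llm_id(llm: str) -> tuple[str, str]:
    """Split an identifier like 'azure/gpt-4' into (provider, model)."""
    provider, _, model = llm.partition("/")
    return provider, model

print(split_llm_id("azure/gpt-4"))
# ('azure', 'gpt-4')
print(split_llm_id("bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0"))
# ('bedrock', 'anthropic.claude-3-5-sonnet-20241022-v2:0')
```

Note that only the first `/` separates provider from model, so model names containing dots or colons pass through unchanged.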
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes

1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution. ↩
Code Review
This pull request adds a comprehensive set of new examples for various LiteLLM providers. The examples would be more helpful if they printed the agent's response after each agent.start() call.
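The pattern the review asks for can be sketched with a stand-in agent (stdlib-only; `StubAgent` and `run_and_print` are hypothetical helpers that only mimic the `start()` interface of the real class, not part of praisonaiagents):

```python
class StubAgent:
    """Stand-in for praisonaiagents.Agent; start() just echoes the prompt."""
    def start(self, prompt: str) -> str:
        return f"[response to: {prompt}]"

def run_and_print(agent, prompt: str) -> str:
    """Run a prompt and surface the result, as the review suggests."""
    response = agent.start(prompt)
    print(response)  # without this, the example produces no visible output
    return response

agent = StubAgent()
run_and_print(agent, "Hello! Can you help me with a writing task?")
```

With the real `Agent`, the same `print(response)` after each `agent.start()` call is all the examples need.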
```python
response = agent.start("Hello! Can you help me with a writing task?")

# Example with creative writing
writing_task = """
Write a short story about a time traveler who discovers
they can only travel to moments of great historical significance.
Make it engaging and about 200 words.
"""

response = agent.start(writing_task)

# Example with reasoning
reasoning_task = """
Explain the concept of quantum entanglement in simple terms,
and then discuss its potential applications in quantum computing.
"""

response = agent.start(reasoning_task)
```
The example does not print the agent's response, making it difficult to see the output. Printing the response after each agent.start() call would make the example more useful.
Suggested change:

```python
response = agent.start("Hello! Can you help me with a writing task?")
print(response)

writing_task = """
Write a short story about a time traveler who discovers
they can only travel to moments of great historical significance.
Make it engaging and about 200 words.
"""
response = agent.start(writing_task)
print(response)

reasoning_task = """
Explain the concept of quantum entanglement in simple terms,
and then discuss its potential applications in quantum computing.
"""
response = agent.start(reasoning_task)
print(response)
```
```python
response = agent.start("Hello! Can you help me with a coding task?")

# Example with code generation
coding_task = """
Write a Python function that implements a binary search algorithm.
Include proper documentation and error handling.
"""

response = agent.start(coding_task)

# Example with analysis
analysis_task = """
Analyze the pros and cons of using microservices architecture
for a large-scale e-commerce application.
"""

response = agent.start(analysis_task)
```
The example does not print the agent's response, making it difficult to see the output. Printing the response after each agent.start() call would make the example more useful.
Suggested change:

```python
response = agent.start("Hello! Can you help me with a coding task?")
print(response)

coding_task = """
Write a Python function that implements a binary search algorithm.
Include proper documentation and error handling.
"""
response = agent.start(coding_task)
print(response)

analysis_task = """
Analyze the pros and cons of using microservices architecture
for a large-scale e-commerce application.
"""
response = agent.start(analysis_task)
print(response)
```
```python
response = agent.start("Hello! Can you help me with a coding task?")

# Example with code generation
coding_task = """
Write a Python function that implements a binary search algorithm.
Include proper documentation and error handling.
"""

response = agent.start(coding_task)

# Example with analysis
analysis_task = """
Analyze the pros and cons of using microservices architecture
for a large-scale e-commerce application.
"""

response = agent.start(analysis_task)
```
The example does not print the agent's response, making it difficult to see the output. Printing the response after each agent.start() call would make the example more useful.
Suggested change:

```python
response = agent.start("Hello! Can you help me with a coding task?")
print(response)

coding_task = """
Write a Python function that implements a binary search algorithm.
Include proper documentation and error handling.
"""
response = agent.start(coding_task)
print(response)

analysis_task = """
Analyze the pros and cons of using microservices architecture
for a large-scale e-commerce application.
"""
response = agent.start(analysis_task)
print(response)
```
```python
response = agent.start("Hello! Can you help me with a coding task?")

# Example with code generation
coding_task = """
Write a Python function that implements a binary search algorithm.
Include proper documentation and error handling.
"""

response = agent.start(coding_task)

# Example with creative writing
creative_task = """
Write a short story about a time traveler who discovers
they can only travel to moments of great historical significance.
"""

response = agent.start(creative_task)
```
The example does not print the agent's response, making it difficult to see the output. Printing the response after each agent.start() call would make the example more useful.
Suggested change:

```python
response = agent.start("Hello! Can you help me with a coding task?")
print(response)

coding_task = """
Write a Python function that implements a binary search algorithm.
Include proper documentation and error handling.
"""
response = agent.start(coding_task)
print(response)

creative_task = """
Write a short story about a time traveler who discovers
they can only travel to moments of great historical significance.
"""
response = agent.start(creative_task)
print(response)
```
```python
response = agent.start("Hello! Can you help me with a language translation task?")

# Example with language translation
translation_task = """
Translate the following text to French and explain any cultural nuances:
"The early bird catches the worm, but the second mouse gets the cheese."
"""

response = agent.start(translation_task)

# Example with creative content generation
creative_task = """
Write a haiku about artificial intelligence and its impact on society.
Then explain the symbolism in your haiku.
"""

response = agent.start(creative_task)
```
The example does not print the agent's response, making it difficult to see the output. Printing the response after each agent.start() call would make the example more useful.
Suggested change:

```python
response = agent.start("Hello! Can you help me with a language translation task?")
print(response)

translation_task = """
Translate the following text to French and explain any cultural nuances:
"The early bird catches the worm, but the second mouse gets the cheese."
"""
response = agent.start(translation_task)
print(response)

creative_task = """
Write a haiku about artificial intelligence and its impact on society.
Then explain the symbolism in your haiku.
"""
response = agent.start(creative_task)
print(response)
```
```python
response = agent.start("Hello! Can you help me with a creative task?")

# Example with creative writing
creative_task = """
Write a short story about a robot learning to paint.
Make it engaging and about 100 words.
"""

response = agent.start(creative_task)

# Example with problem solving
problem_task = """
Solve this problem step by step:
If a train travels at 60 mph for 2.5 hours, how far does it travel?
"""

response = agent.start(problem_task)
```
The example does not print the agent's response, making it difficult to see the output. Printing the response after each agent.start() call would make the example more useful.
Suggested change:

```python
response = agent.start("Hello! Can you help me with a creative task?")
print(response)

creative_task = """
Write a short story about a robot learning to paint.
Make it engaging and about 100 words.
"""
response = agent.start(creative_task)
print(response)

problem_task = """
Solve this problem step by step:
If a train travels at 60 mph for 2.5 hours, how far does it travel?
"""
response = agent.start(problem_task)
print(response)
```
```python
response = agent.start("Hello! Can you help me with a business analysis task?")

# Example with business analysis
business_task = """
Analyze the potential market opportunities for a new AI-powered
productivity tool targeting remote workers. Include market size,
competitive landscape, and go-to-market strategy recommendations.
"""

response = agent.start(business_task)

# Example with document summarization
summary_task = """
Summarize the key points from this business proposal:
Our company proposes to develop an AI-powered customer service chatbot
that can handle 80% of common customer inquiries automatically.
"""

response = agent.start(summary_task)
```
The example does not print the agent's response, making it difficult to see the output. Printing the response after each agent.start() call would make the example more useful.
Suggested change:

```python
response = agent.start("Hello! Can you help me with a business analysis task?")
print(response)

business_task = """
Analyze the potential market opportunities for a new AI-powered
productivity tool targeting remote workers. Include market size,
competitive landscape, and go-to-market strategy recommendations.
"""
response = agent.start(business_task)
print(response)

summary_task = """
Summarize the key points from this business proposal:
Our company proposes to develop an AI-powered customer service chatbot
that can handle 80% of common customer inquiries automatically.
"""
response = agent.start(summary_task)
print(response)
```
```python
response = agent.start("Hello! Can you help me with a research task?")

# Example with research and analysis
research_task = """
Research and provide insights on the latest developments in
renewable energy technology, focusing on solar and wind power innovations.
"""

response = agent.start(research_task)

# Example with creative content
creative_task = """
Write a haiku about artificial intelligence and its impact on society.
Then explain the symbolism in your haiku.
"""

response = agent.start(creative_task)
```
The example does not print the agent's response, making it difficult to see the output. Printing the response after each agent.start() call would make the example more useful.
Suggested change:

```python
response = agent.start("Hello! Can you help me with a research task?")
print(response)

research_task = """
Research and provide insights on the latest developments in
renewable energy technology, focusing on solar and wind power innovations.
"""
response = agent.start(research_task)
print(response)

creative_task = """
Write a haiku about artificial intelligence and its impact on society.
Then explain the symbolism in your haiku.
"""
response = agent.start(creative_task)
print(response)
```
```python
response = agent.start("Hello! Can you help me with a research task?")

# Example with research and analysis
research_task = """
Research and provide insights on the latest developments in
renewable energy technology, focusing on solar and wind power innovations.
"""

response = agent.start(research_task)

# Example with multimodal capabilities (text-based for now)
multimodal_task = """
Describe how you would analyze an image of a city skyline
and provide insights about urban development patterns.
"""

response = agent.start(multimodal_task)
```
The example does not print the agent's response, making it difficult to see the output. Printing the response after each agent.start() call would make the example more useful.
Suggested change:

```python
response = agent.start("Hello! Can you help me with a research task?")
print(response)

research_task = """
Research and provide insights on the latest developments in
renewable energy technology, focusing on solar and wind power innovations.
"""
response = agent.start(research_task)
print(response)

multimodal_task = """
Describe how you would analyze an image of a city skyline
and provide insights about urban development patterns.
"""
response = agent.start(multimodal_task)
print(response)
```
```python
response = agent.start("Hello! Can you help me with a mathematical problem?")

# Example with mathematical reasoning
math_task = """
Solve this calculus problem step by step:
Find the derivative of f(x) = x^3 * e^(2x) using the product rule.
"""

response = agent.start(math_task)

# Example with code optimization
code_task = """
Optimize this Python function for better performance:
def find_duplicates(arr):
    duplicates = []
    for i in range(len(arr)):
        for j in range(i+1, len(arr)):
            if arr[i] == arr[j] and arr[i] not in duplicates:
                duplicates.append(arr[i])
    return duplicates
"""

response = agent.start(code_task)
```
The example does not print the agent's response, making it difficult to see the output. Printing the response after each agent.start() call would make the example more useful.
Suggested change:

```python
response = agent.start("Hello! Can you help me with a mathematical problem?")
print(response)

math_task = """
Solve this calculus problem step by step:
Find the derivative of f(x) = x^3 * e^(2x) using the product rule.
"""
response = agent.start(math_task)
print(response)

code_task = """
Optimize this Python function for better performance:
def find_duplicates(arr):
    duplicates = []
    for i in range(len(arr)):
        for j in range(i+1, len(arr)):
            if arr[i] == arr[j] and arr[i] not in duplicates:
                duplicates.append(arr[i])
    return duplicates
"""
response = agent.start(code_task)
print(response)
```
Actionable comments posted: 5
🧹 Nitpick comments (4)
examples/python/providers/openrouter/openrouter_example.py (2)
13-15: Responses are discarded – print or log them

The example overwrites `response` three times without using it, so users see no output.

```diff
-response = agent.start("Hello! Can you help me with a creative writing task?")
+print(agent.start("Hello! Can you help me with a creative writing task?"))
 ...
-response = agent.start(writing_task)
+print(agent.start(writing_task))
 ...
-response = agent.start(reasoning_task)
+print(agent.start(reasoning_task))
```

Also applies to: 23-24, 31-31
17-22: Trim leading newline inside triple-quoted prompt

A newline immediately after `"""` becomes part of the prompt and may shift model formatting:

```diff
-writing_task = """
+writing_task = """Write a short story about a time traveler who discovers
```

Same applies to `reasoning_task`.

examples/python/providers/vllm/vllm_example.py (2)
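The leading-newline nitpick above is plain Python string semantics and easy to verify (illustrative, stdlib-only; `writing_task` here is just a sample string, not taken from any one example file):

```python
# A newline right after the opening triple quote is part of the string.
writing_task = """
Write a short story.
"""
print(repr(writing_task[0]))  # the first character is '\n'

# str.strip() drops the leading/trailing newlines while keeping
# the multi-line layout readable in source code.
cleaned = writing_task.strip()
print(repr(cleaned))  # 'Write a short story.'
```

Starting the text immediately after `"""`, as the diff suggests, avoids the issue without any call at all.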
13-15: Print the agent outputs

Same issue as the OpenRouter example – the script does not surface any results.

```diff
-response = agent.start("Hello! Can you help me with a mathematical problem?")
+print(agent.start("Hello! Can you help me with a mathematical problem?"))
 ...
-response = agent.start(math_task)
+print(agent.start(math_task))
 ...
-response = agent.start(code_task)
+print(agent.start(code_task))
```

Also applies to: 22-23, 36-36
24-34: Provide a realistic, optimised solution hint

The optimisation task would be more useful if the example also showed an improved function, allowing users to compare input vs. output. Consider appending something like:

```python
optimised = agent.start(code_task)
print("Optimised implementation:\n", optimised)
```

This helps demonstrate the agent's coding capability.
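For reference, the quadratic `find_duplicates` in the prompt can be rewritten in O(n) with sets while preserving first-duplication order (a sketch of one possible optimisation, not part of the PR):

```python
def find_duplicates(arr):
    """Return each duplicated value once, in order of first duplication."""
    seen = set()        # values encountered so far
    reported = set()    # values already emitted as duplicates
    duplicates = []
    for item in arr:
        if item in seen and item not in reported:
            duplicates.append(item)
            reported.add(item)
        seen.add(item)
    return duplicates

print(find_duplicates([1, 2, 3, 2, 1, 2]))  # [2, 1]
```

Unlike the nested-loop version, this makes a single pass and replaces both the `arr[i] == arr[j]` scan and the `not in duplicates` list lookup with set membership checks.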
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
- examples/python/providers/aws/aws_bedrock_example.py (1 hunks)
- examples/python/providers/azure/azure_openai_example.py (1 hunks)
- examples/python/providers/cloudflare/cloudflare_example.py (1 hunks)
- examples/python/providers/fireworks/fireworks_example.py (1 hunks)
- examples/python/providers/huggingface/huggingface_example.py (1 hunks)
- examples/python/providers/openrouter/openrouter_example.py (1 hunks)
- examples/python/providers/perplexity/perplexity_example.py (1 hunks)
- examples/python/providers/replicate/replicate_example.py (1 hunks)
- examples/python/providers/sagemaker/sagemaker_example.py (1 hunks)
- examples/python/providers/together/together_ai_example.py (1 hunks)
- examples/python/providers/vertex/vertex_example.py (1 hunks)
- examples/python/providers/vllm/vllm_example.py (1 hunks)
🧰 Additional context used
🧠 Learnings (13)
📓 Common learnings (all learnt from: CR, PR: MervinPraison/PraisonAI#0)

- `src/praisonai-agents/CLAUDE.md`: Use the unified LLM wrapper in `praisonaiagents/llm/` for integrating with multiple LLM providers.
- `src/praisonai-agents/CLAUDE.md`: Use the `Agent` class from `praisonaiagents/agent/` for core agent implementations, supporting LLM integration, tool calling, and self-reflection.
- `src/praisonai-ts/.cursorrules`: The `PraisonAIAgents` class in `src/agents/agents.ts` should manage multiple agents, tasks, memory, and process type, mirroring the Python `agents.py`.
- `src/praisonai-ts/.windsurfrules`: Use the `aisdk` library for all large language model (LLM) calls in TypeScript, such as using `generateText` for text generation.
- `src/praisonai-agents/CLAUDE.md`: Use conda environment activation (`conda activate praisonai-agents`) before running development or tests.
- `src/praisonai-ts/.cursorrules`: The `Agent` class in `src/agent/agent.ts` should encapsulate a single agent's role, name, and methods for calling the LLM using `aisdk`.
examples/python/providers/azure/azure_openai_example.py (learnt from: CR, PR: MervinPraison/PraisonAI#0)

- `src/praisonai-ts/.cursorrules`: The `AutoAgents` class in `src/agents/autoagents.ts` should provide high-level convenience for automatically generating agent/task configuration from user instructions, using `aisdk` to parse config.
- `src/praisonai-ts/.cursorrules`: The main entry point `src/index.ts` should re-export key classes and functions (such as `Agent`, `Agents`, `Task`, etc.) for easy import by consumers.
- `src/praisonai-agents/CLAUDE.md`: Test files should be placed in the `tests/` directory and demonstrate specific usage patterns, serving as both test and documentation.
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use conda environment activation (`conda activate praisonai-agents`) before running development or tests.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the `Task` class from `praisonaiagents/task/` for defining tasks, supporting context, callbacks, output specifications, and guardrails.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.windsurfrules:0-0
Timestamp: 2025-06-30T10:06:44.129Z
Learning: Applies to src/praisonai-ts/src/{llm,agent,agents,task}/**/*.ts : Use the 'aisdk' library for all large language model (LLM) calls in TypeScript, such as using 'generateText' for text generation.
examples/python/providers/openrouter/openrouter_example.py (5)
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the `Agent` class from `praisonaiagents/agent/` for core agent implementations, supporting LLM integration, tool calling, and self-reflection.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the `Task` class from `praisonaiagents/task/` for defining tasks, supporting context, callbacks, output specifications, and guardrails.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/autoagents.ts : The 'AutoAgents' class in 'src/agents/autoagents.ts' should provide high-level convenience for automatically generating agent/task configuration from user instructions, using 'aisdk' to parse config.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/agents.ts : The 'PraisonAIAgents' class in 'src/agents/agents.ts' should manage multiple agents, tasks, memory, and process type, mirroring the Python 'agents.py'.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use conda environment activation (`conda activate praisonai-agents`) before running development or tests.
examples/python/providers/replicate/replicate_example.py (4)
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the `Agent` class from `praisonaiagents/agent/` for core agent implementations, supporting LLM integration, tool calling, and self-reflection.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/agents.ts : The 'PraisonAIAgents' class in 'src/agents/agents.ts' should manage multiple agents, tasks, memory, and process type, mirroring the Python 'agents.py'.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/autoagents.ts : The 'AutoAgents' class in 'src/agents/autoagents.ts' should provide high-level convenience for automatically generating agent/task configuration from user instructions, using 'aisdk' to parse config.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the `Task` class from `praisonaiagents/task/` for defining tasks, supporting context, callbacks, output specifications, and guardrails.
examples/python/providers/sagemaker/sagemaker_example.py (8)
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the `Agent` class from `praisonaiagents/agent/` for core agent implementations, supporting LLM integration, tool calling, and self-reflection.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agent/agent.ts : The 'Agent' class in 'src/agent/agent.ts' should encapsulate a single agent's role, name, and methods for calling the LLM using 'aisdk'.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/agents.ts : The 'PraisonAIAgents' class in 'src/agents/agents.ts' should manage multiple agents, tasks, memory, and process type, mirroring the Python 'agents.py'.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the unified LLM wrapper in `praisonaiagents/llm/` for integrating with multiple LLM providers.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.windsurfrules:0-0
Timestamp: 2025-06-30T10:06:44.129Z
Learning: Applies to src/praisonai-ts/src/{llm,agent,agents,task}/**/*.ts : Use the 'aisdk' library for all large language model (LLM) calls in TypeScript, such as using 'generateText' for text generation.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/autoagents.ts : The 'AutoAgents' class in 'src/agents/autoagents.ts' should provide high-level convenience for automatically generating agent/task configuration from user instructions, using 'aisdk' to parse config.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the `Task` class from `praisonaiagents/task/` for defining tasks, supporting context, callbacks, output specifications, and guardrails.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use conda environment activation (`conda activate praisonai-agents`) before running development or tests.
examples/python/providers/perplexity/perplexity_example.py (7)
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the unified LLM wrapper in `praisonaiagents/llm/` for integrating with multiple LLM providers.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the `Agent` class from `praisonaiagents/agent/` for core agent implementations, supporting LLM integration, tool calling, and self-reflection.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use conda environment activation (`conda activate praisonai-agents`) before running development or tests.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/autoagents.ts : The 'AutoAgents' class in 'src/agents/autoagents.ts' should provide high-level convenience for automatically generating agent/task configuration from user instructions, using 'aisdk' to parse config.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/agents.ts : The 'PraisonAIAgents' class in 'src/agents/agents.ts' should manage multiple agents, tasks, memory, and process type, mirroring the Python 'agents.py'.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the `Task` class from `praisonaiagents/task/` for defining tasks, supporting context, callbacks, output specifications, and guardrails.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Applies to src/praisonai-agents/tests/**/*.py : Test files should be placed in the `tests/` directory and demonstrate specific usage patterns, serving as both test and documentation.
examples/python/providers/vllm/vllm_example.py (9)
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the unified LLM wrapper in `praisonaiagents/llm/` for integrating with multiple LLM providers.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/llm/llm.ts : The 'LLM' class in 'llm.ts' should wrap 'aisdk.generateText' calls for generating text responses.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the `Agent` class from `praisonaiagents/agent/` for core agent implementations, supporting LLM integration, tool calling, and self-reflection.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/llm/llm.ts : Replace all references to 'LLM' or 'litellm' with 'aisdk' usage for large language model calls in Node.js/TypeScript code.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agent/agent.ts : The 'Agent' class in 'src/agent/agent.ts' should encapsulate a single agent's role, name, and methods for calling the LLM using 'aisdk'.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use conda environment activation (`conda activate praisonai-agents`) before running development or tests.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/agents.ts : The 'PraisonAIAgents' class in 'src/agents/agents.ts' should manage multiple agents, tasks, memory, and process type, mirroring the Python 'agents.py'.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the `Task` class from `praisonaiagents/task/` for defining tasks, supporting context, callbacks, output specifications, and guardrails.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/autoagents.ts : The 'AutoAgents' class in 'src/agents/autoagents.ts' should provide high-level convenience for automatically generating agent/task configuration from user instructions, using 'aisdk' to parse config.
examples/python/providers/cloudflare/cloudflare_example.py (8)
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the `Agent` class from `praisonaiagents/agent/` for core agent implementations, supporting LLM integration, tool calling, and self-reflection.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/autoagents.ts : The 'AutoAgents' class in 'src/agents/autoagents.ts' should provide high-level convenience for automatically generating agent/task configuration from user instructions, using 'aisdk' to parse config.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use conda environment activation (`conda activate praisonai-agents`) before running development or tests.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the `Task` class from `praisonaiagents/task/` for defining tasks, supporting context, callbacks, output specifications, and guardrails.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/agents.ts : The 'PraisonAIAgents' class in 'src/agents/agents.ts' should manage multiple agents, tasks, memory, and process type, mirroring the Python 'agents.py'.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Applies to src/praisonai-agents/tests/**/*.py : Test files should be placed in the `tests/` directory and demonstrate specific usage patterns, serving as both test and documentation.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.windsurfrules:0-0
Timestamp: 2025-06-30T10:06:44.129Z
Learning: Applies to src/praisonai-ts/src/{llm,agent,agents,task}/**/*.ts : Use the 'aisdk' library for all large language model (LLM) calls in TypeScript, such as using 'generateText' for text generation.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agent/agent.ts : The 'Agent' class in 'src/agent/agent.ts' should encapsulate a single agent's role, name, and methods for calling the LLM using 'aisdk'.
examples/python/providers/together/together_ai_example.py (11)
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/llm/llm.ts : The 'LLM' class in 'llm.ts' should wrap 'aisdk.generateText' calls for generating text responses.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/autoagents.ts : The 'AutoAgents' class in 'src/agents/autoagents.ts' should provide high-level convenience for automatically generating agent/task configuration from user instructions, using 'aisdk' to parse config.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.windsurfrules:0-0
Timestamp: 2025-06-30T10:06:44.129Z
Learning: Applies to src/praisonai-ts/src/{llm,agent,agents,task}/**/*.ts : Use the 'aisdk' library for all large language model (LLM) calls in TypeScript, such as using 'generateText' for text generation.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agent/agent.ts : The 'Agent' class in 'src/agent/agent.ts' should encapsulate a single agent's role, name, and methods for calling the LLM using 'aisdk'.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the unified LLM wrapper in `praisonaiagents/llm/` for integrating with multiple LLM providers.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/agents.ts : The 'PraisonAIAgents' class in 'src/agents/agents.ts' should manage multiple agents, tasks, memory, and process type, mirroring the Python 'agents.py'.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the `Agent` class from `praisonaiagents/agent/` for core agent implementations, supporting LLM integration, tool calling, and self-reflection.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/llm/llm.ts : Replace all references to 'LLM' or 'litellm' with 'aisdk' usage for large language model calls in Node.js/TypeScript code.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use conda environment activation (`conda activate praisonai-agents`) before running development or tests.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the `Task` class from `praisonaiagents/task/` for defining tasks, supporting context, callbacks, output specifications, and guardrails.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Applies to src/praisonai-agents/tests/**/*.py : Test files should be placed in the `tests/` directory and demonstrate specific usage patterns, serving as both test and documentation.
examples/python/providers/vertex/vertex_example.py (5)
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the `Agent` class from `praisonaiagents/agent/` for core agent implementations, supporting LLM integration, tool calling, and self-reflection.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/autoagents.ts : The 'AutoAgents' class in 'src/agents/autoagents.ts' should provide high-level convenience for automatically generating agent/task configuration from user instructions, using 'aisdk' to parse config.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use conda environment activation (`conda activate praisonai-agents`) before running development or tests.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/agents.ts : The 'PraisonAIAgents' class in 'src/agents/agents.ts' should manage multiple agents, tasks, memory, and process type, mirroring the Python 'agents.py'.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the `Task` class from `praisonaiagents/task/` for defining tasks, supporting context, callbacks, output specifications, and guardrails.
examples/python/providers/fireworks/fireworks_example.py (6)
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the `Agent` class from `praisonaiagents/agent/` for core agent implementations, supporting LLM integration, tool calling, and self-reflection.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the `Task` class from `praisonaiagents/task/` for defining tasks, supporting context, callbacks, output specifications, and guardrails.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use conda environment activation (`conda activate praisonai-agents`) before running development or tests.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/autoagents.ts : The 'AutoAgents' class in 'src/agents/autoagents.ts' should provide high-level convenience for automatically generating agent/task configuration from user instructions, using 'aisdk' to parse config.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/agents.ts : The 'PraisonAIAgents' class in 'src/agents/agents.ts' should manage multiple agents, tasks, memory, and process type, mirroring the Python 'agents.py'.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Applies to src/praisonai-agents/tests/**/*.py : Test files should be placed in the `tests/` directory and demonstrate specific usage patterns, serving as both test and documentation.
examples/python/providers/aws/aws_bedrock_example.py (5)
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the `Agent` class from `praisonaiagents/agent/` for core agent implementations, supporting LLM integration, tool calling, and self-reflection.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the `Task` class from `praisonaiagents/task/` for defining tasks, supporting context, callbacks, output specifications, and guardrails.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/autoagents.ts : The 'AutoAgents' class in 'src/agents/autoagents.ts' should provide high-level convenience for automatically generating agent/task configuration from user instructions, using 'aisdk' to parse config.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use conda environment activation (`conda activate praisonai-agents`) before running development or tests.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/agents.ts : The 'PraisonAIAgents' class in 'src/agents/agents.ts' should manage multiple agents, tasks, memory, and process type, mirroring the Python 'agents.py'.
examples/python/providers/huggingface/huggingface_example.py (11)
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the `Agent` class from `praisonaiagents/agent/` for core agent implementations, supporting LLM integration, tool calling, and self-reflection.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the unified LLM wrapper in `praisonaiagents/llm/` for integrating with multiple LLM providers.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/llm/llm.ts : The 'LLM' class in 'llm.ts' should wrap 'aisdk.generateText' calls for generating text responses.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.windsurfrules:0-0
Timestamp: 2025-06-30T10:06:44.129Z
Learning: Applies to src/praisonai-ts/src/{llm,agent,agents,task}/**/*.ts : Use the 'aisdk' library for all large language model (LLM) calls in TypeScript, such as using 'generateText' for text generation.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agent/agent.ts : The 'Agent' class in 'src/agent/agent.ts' should encapsulate a single agent's role, name, and methods for calling the LLM using 'aisdk'.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/autoagents.ts : The 'AutoAgents' class in 'src/agents/autoagents.ts' should provide high-level convenience for automatically generating agent/task configuration from user instructions, using 'aisdk' to parse config.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/agents.ts : The 'PraisonAIAgents' class in 'src/agents/agents.ts' should manage multiple agents, tasks, memory, and process type, mirroring the Python 'agents.py'.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/llm/llm.ts : Replace all references to 'LLM' or 'litellm' with 'aisdk' usage for large language model calls in Node.js/TypeScript code.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Applies to src/praisonai-agents/praisonaiagents/{memory,knowledge}/**/*.py : Place memory-related implementations in `praisonaiagents/memory/` and knowledge/document processing in `praisonaiagents/knowledge/`.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use conda environment activation (`conda activate praisonai-agents`) before running development or tests.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the `Task` class from `praisonaiagents/task/` for defining tasks, supporting context, callbacks, output specifications, and guardrails.
🧬 Code Graph Analysis (8)
examples/python/providers/azure/azure_openai_example.py (1)
src/praisonai-agents/praisonaiagents/llm/llm.py (1)
response(2171-2267)
examples/python/providers/replicate/replicate_example.py (1)
src/praisonai-agents/praisonaiagents/llm/llm.py (1)
response(2171-2267)
examples/python/providers/perplexity/perplexity_example.py (1)
src/praisonai-agents/praisonaiagents/llm/llm.py (1)
response(2171-2267)
examples/python/providers/vllm/vllm_example.py (1)
src/praisonai-agents/praisonaiagents/llm/llm.py (1)
response(2171-2267)
examples/python/providers/cloudflare/cloudflare_example.py (1)
src/praisonai-agents/praisonaiagents/llm/llm.py (1)
response(2171-2267)
examples/python/providers/together/together_ai_example.py (1)
src/praisonai-agents/praisonaiagents/llm/llm.py (1)
response(2171-2267)
examples/python/providers/vertex/vertex_example.py (1)
src/praisonai-agents/praisonaiagents/llm/llm.py (1)
response(2171-2267)
examples/python/providers/huggingface/huggingface_example.py (1)
src/praisonai-agents/praisonaiagents/llm/llm.py (1)
response(2171-2267)
🔇 Additional comments (11)
examples/python/providers/azure/azure_openai_example.py (1)
1-30: Well-structured provider example following established patterns. The Azure OpenAI example demonstrates the unified Agent interface effectively, with diverse use cases covering conversational, coding, and analytical tasks. The model specification follows the correct LiteLLM format, and the examples provide good educational value.
examples/python/providers/vertex/vertex_example.py (1)
1-30: Excellent provider example with a unique multimodal demonstration. The Vertex AI example follows the unified Agent interface pattern effectively. The inclusion of a multimodal task (lines 24-28) is particularly well thought out, showcasing Gemini's capabilities while keeping the example accessible through text-based interaction.
examples/python/providers/sagemaker/sagemaker_example.py (1)
1-32: Well-crafted provider example with business-focused use cases. The SageMaker example follows the unified Agent interface pattern effectively. The business-focused examples (market analysis on lines 17-21 and document summarization on lines 26-30) are particularly well suited to SageMaker's typical enterprise applications.
examples/python/providers/together/together_ai_example.py (1)
1-30: Consistent provider example with diverse task demonstrations. The Together AI example follows the unified Agent interface pattern effectively. The combination of research and creative tasks (lines 17-20 and 25-28) covers both analytical and creative LLM capabilities.
examples/python/providers/huggingface/huggingface_example.py (1)
1-30: Excellent provider example with a unique translation demonstration. The Hugging Face example follows the unified Agent interface pattern effectively. The translation task with cultural nuances (lines 17-20) is particularly well crafted, going beyond simple translation and providing educational value about cultural context.
examples/python/providers/replicate/replicate_example.py (1)
10-11: Confirm Replicate model identifier
Please ensure the string passed to LiteLLM's Replicate backend exactly matches the model slug on Replicate; any typo will silently fall back to the default OpenAI model.

• File: examples/python/providers/replicate/replicate_example.py (lines 10–11)
• Current value: `"replicate/meta/llama-3.1-8b-instruct"`

Verify against the official model page on Replicate and update if needed.
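Since a bad slug fails silently, a cheap local sanity check can surface obvious typos before any network call. The helper below is hypothetical — it is not part of PraisonAI or LiteLLM — and only validates the `replicate/<owner>/<model>` shape:

```python
# Hypothetical helper for illustration only -- not a PraisonAI/LiteLLM API.
# Splits a LiteLLM-style Replicate slug into (provider, owner, model) so a
# malformed string raises immediately instead of silently misrouting.
def parse_replicate_slug(slug: str) -> tuple:
    parts = slug.split("/")
    if len(parts) < 3 or parts[0] != "replicate" or not all(parts):
        raise ValueError(f"expected 'replicate/<owner>/<model>', got {slug!r}")
    return parts[0], parts[1], "/".join(parts[2:])

print(parse_replicate_slug("replicate/meta/llama-3.1-8b-instruct"))
# → ('replicate', 'meta', 'llama-3.1-8b-instruct')
```

This only catches structural mistakes; confirming the owner/model pair actually exists still requires checking the model page on Replicate.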
examples/python/providers/cloudflare/cloudflare_example.py (1)
10-11: No change needed for the Cloudflare model slug. The `@cf` prefix is required by LiteLLM's Cloudflare adapter. The slug `model="cloudflare/@cf/meta/llama-3.1-8b-instruct"` is correct and matches the documented format for Cloudflare Workers AI models.
Likely an incorrect or invalid review comment.
examples/python/providers/aws/aws_bedrock_example.py (1)
10-11: Confirm Bedrock model identifier availability. The slug `bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0` is used in multiple examples; please ensure it's supported in your target region:

• examples/python/models/aws-bedrock.py
• examples/python/providers/aws/aws_bedrock_example.py (around lines 10–11)

You can verify availability by running `aws bedrock list-foundation-models --region <your-region>` or by checking the AWS Console under Bedrock models.
examples/python/providers/fireworks/fireworks_example.py (1)
10-11: Verify the Fireworks model slug. The reference `llm="fireworks/accounts/fireworks/models/llama-v3-8b-instruct"` (examples/python/providers/fireworks/fireworks_example.py, lines 10–11) appears only here. Please confirm that this exact slug matches the provider's official model catalog (e.g., via the Fireworks API or documentation) to avoid runtime errors.
examples/python/providers/openrouter/openrouter_example.py (1)
5-11: Import and model identifier are correct

Both checks pass:

- The `Agent` class is re-exported in `praisonaiagents/__init__.py`, so `from praisonaiagents import Agent` is valid.
- The model string `"openrouter/meta-llama/llama-3.1-8b-instruct"` matches other examples in examples/python/providers/openrouter/ and doesn't require a suffix (e.g., `:free`).

No changes needed.
examples/python/providers/vllm/vllm_example.py (1)
5-11: Agent import is correct; please verify vLLM model string with your adapter
- Agent is exported from the package root (`from praisonaiagents import Agent`) and matches `__all__` in src/praisonai-agents/praisonaiagents/__init__.py.
- The `llm="vllm/meta-llama/Llama-3.1-8B-Instruct"` string is passed through to the external LiteLLM-vLLM adapter. No patterns for `vllm/` or `meta-llama` are defined in this codebase, so please confirm against the adapter's documentation (case sensitivity, dashes, provider/model format).
| """ | ||
| Basic example of using Replicate with PraisonAI | ||
| """ | ||
|
|
||
| from praisonaiagents import Agent | ||
|
|
||
| # Initialize Agent with Replicate | ||
| agent = Agent( | ||
| instructions="You are a helpful assistant", | ||
| llm="replicate/meta/llama-3.1-8b-instruct", | ||
| ) | ||
|
|
||
| # Example conversation | ||
| response = agent.start("Hello! Can you help me with a creative task?") | ||
|
|
||
| # Example with creative writing | ||
| creative_task = """ | ||
| Write a short story about a robot learning to paint. | ||
| Make it engaging and about 100 words. | ||
| """ | ||
|
|
||
| response = agent.start(creative_task) | ||
|
|
||
| # Example with problem solving | ||
| problem_task = """ | ||
| Solve this problem step by step: | ||
| If a train travels at 60 mph for 2.5 hours, how far does it travel? | ||
| """ | ||
|
|
||
| response = agent.start(problem_task) No newline at end of file |
🛠️ Refactor suggestion
Wrap example logic in a main() guard
Importing this file will immediately execute three remote LLM calls – undesirable for libraries, tests, and docs tooling.
Encapsulate the demo in a main() function and protect it with the conventional entry-point guard.
```diff
@@
-from praisonaiagents import Agent
+# Example usage is wrapped in a `main()` so that it only
+# runs when the file is executed directly.
+from praisonaiagents import Agent
+
+
+def main() -> None:
+    # Initialize Agent with Replicate
+    agent = Agent(
+        instructions="You are a helpful assistant",
+        llm="replicate/meta/llama-3.1-8b-instruct",
+    )
+
+    # Example conversation
+    print(agent.start("Hello! Can you help me with a creative task?"))
+
+    # Example with creative writing
+    creative_task = (
+        "Write a short story (≈100 words) about a robot learning to paint."
+    )
+    print(agent.start(creative_task))
+
+    # Example with problem solving
+    problem_task = (
+        "Solve step-by-step: If a train travels at 60 mph for 2.5 h, "
+        "how far does it travel?"
+    )
+    print(agent.start(problem_task))
+
+
+if __name__ == "__main__":
+    main()
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In examples/python/providers/replicate/replicate_example.py lines 1 to 30, the
example code runs immediately upon import, causing unintended remote LLM calls.
To fix this, wrap all example logic inside a main() function and add the
standard if __name__ == "__main__": guard to call main(), preventing execution
during import.
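The import-safe pattern the prompts above describe can be sketched generically; `run_demo` is an illustrative stand-in for the provider calls, not part of the PraisonAI API:

```python
# Minimal, provider-free sketch of the import-safe pattern: all demo
# logic lives in main(), so importing this module triggers no LLM calls.

def run_demo() -> list[str]:
    """Stand-in for the three example prompts; a real script would call
    agent.start(prompt) here instead of echoing."""
    prompts = [
        "Hello! Can you help me with a creative task?",
        "Write a short story about a robot learning to paint.",
        "If a train travels at 60 mph for 2.5 hours, how far does it travel?",
    ]
    return [f"response to: {p}" for p in prompts]


def main() -> None:
    for reply in run_demo():
        print(reply)


if __name__ == "__main__":
    # Executes only when the file is run directly, never on import.
    main()
```

The same skeleton applies to every provider example in this PR; only the `Agent` construction and prompt strings differ.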
| """ | ||
| Basic example of using Cloudflare AI with PraisonAI | ||
| """ | ||
|
|
||
| from praisonaiagents import Agent | ||
|
|
||
| # Initialize Agent with Cloudflare AI | ||
| agent = Agent( | ||
| instructions="You are a helpful assistant", | ||
| llm="cloudflare/@cf/meta/llama-3.1-8b-instruct", | ||
| ) | ||
|
|
||
| # Example conversation | ||
| response = agent.start("Hello! Can you help me with a coding task?") | ||
|
|
||
| # Example with code generation | ||
| coding_task = """ | ||
| Write a Python function that implements a binary search algorithm. | ||
| Include proper documentation and error handling. | ||
| """ | ||
|
|
||
| response = agent.start(coding_task) | ||
|
|
||
| # Example with analysis | ||
| analysis_task = """ | ||
| Analyze the pros and cons of using microservices architecture | ||
| for a large-scale e-commerce application. | ||
| """ | ||
|
|
||
| response = agent.start(analysis_task) No newline at end of file |
🛠️ Refactor suggestion
Move example code behind if __name__ == "__main__":
Same motivation as above – prevents accidental execution and side-effects on import.
```diff
-from praisonaiagents import Agent
+from praisonaiagents import Agent
+
+
+def main() -> None:
+    agent = Agent(
+        instructions="You are a helpful assistant",
+        llm="cloudflare/@cf/meta/llama-3.1-8b-instruct",
+    )
+
+    print(agent.start("Hello! Can you help me with a coding task?"))
+
+    coding_task = (
+        "Write a Python binary-search function with docstrings and "
+        "error handling."
+    )
+    print(agent.start(coding_task))
+
+    analysis_task = (
+        "Analyse pros and cons of micro-services architecture for a large "
+        "e-commerce platform."
+    )
+    print(agent.start(analysis_task))
+
+
+if __name__ == "__main__":
+    main()
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In examples/python/providers/cloudflare/cloudflare_example.py lines 1 to 30, the
example code runs immediately on import, which can cause unintended side
effects. To fix this, wrap all the example code starting from the Agent
initialization through the example conversations inside an if __name__ ==
"__main__": block to ensure it only executes when the script is run directly.
| """ | ||
| Basic example of using Perplexity with PraisonAI | ||
| """ | ||
|
|
||
| from praisonaiagents import Agent | ||
|
|
||
| # Initialize Agent with Perplexity | ||
| agent = Agent( | ||
| instructions="You are a helpful assistant", | ||
| llm="perplexity/llama-3.1-8b-instruct", | ||
| ) | ||
|
|
||
| # Example conversation | ||
| response = agent.start("Hello! Can you help me with a research task?") | ||
|
|
||
| # Example with research and analysis | ||
| research_task = """ | ||
| Research and provide insights on the latest developments in | ||
| artificial intelligence, focusing on recent breakthroughs and trends. | ||
| """ | ||
|
|
||
| response = agent.start(research_task) | ||
|
|
||
| # Example with document analysis | ||
| analysis_task = """ | ||
| Analyze the potential market opportunities for a new AI-powered | ||
| productivity tool targeting remote workers. | ||
| """ | ||
|
|
||
| response = agent.start(analysis_task) No newline at end of file |
🛠️ Refactor suggestion
Apply the standard example skeleton
Same comments as the other providers: guard execution, print results, reuse Agent instance.
```diff
-from praisonaiagents import Agent
+from praisonaiagents import Agent
+
+
+def main() -> None:
+    agent = Agent(
+        instructions="You are a helpful assistant",
+        llm="perplexity/llama-3.1-8b-instruct",
+    )
+
+    print(agent.start("Hello! Can you help me with a research task?"))
+
+    research_task = (
+        "Give insights on the latest breakthroughs and trends in AI."
+    )
+    print(agent.start(research_task))
+
+    analysis_task = (
+        "Analyse market opportunities for an AI productivity tool aimed "
+        "at remote workers."
+    )
+    print(agent.start(analysis_task))
+
+
+if __name__ == "__main__":
+    main()
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In examples/python/providers/perplexity/perplexity_example.py from lines 1 to
30, the example lacks a main guard, does not print the responses, and creates
multiple Agent instances unnecessarily. Wrap the example code inside a main
guard (if __name__ == "__main__"), print the responses after each agent.start
call to show output, and reuse the single Agent instance for all calls instead
of creating new ones.
| """ | ||
| Basic example of using AWS Bedrock with PraisonAI | ||
| """ | ||
|
|
||
| from praisonaiagents import Agent | ||
|
|
||
| # Initialize Agent with AWS Bedrock (Claude) | ||
| agent = Agent( | ||
| instructions="You are a helpful assistant", | ||
| llm="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0", | ||
| ) | ||
|
|
||
| # Example conversation | ||
| response = agent.start("Hello! Can you help me with a writing task?") | ||
|
|
||
| # Example with creative writing | ||
| writing_task = """ | ||
| Write a short story about a time traveler who discovers | ||
| they can only travel to moments of great historical significance. | ||
| Make it engaging and about 200 words. | ||
| """ | ||
|
|
||
| response = agent.start(writing_task) | ||
|
|
||
| # Example with reasoning | ||
| reasoning_task = """ | ||
| Explain the concept of quantum entanglement in simple terms, | ||
| and then discuss its potential applications in quantum computing. | ||
| """ | ||
|
|
||
| response = agent.start(reasoning_task) No newline at end of file |
🛠️ Refactor suggestion
Prevent auto-execution & surface outputs
Encapsulate demo logic and stream responses to stdout.
```diff
-from praisonaiagents import Agent
+from praisonaiagents import Agent
+
+
+def main() -> None:
+    agent = Agent(
+        instructions="You are a helpful assistant",
+        llm="bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0",
+    )
+
+    print(agent.start("Hello! Can you help me with a writing task?"))
+
+    writing_task = (
+        "Write a 200-word short story about a time traveller who can only "
+        "visit historically significant moments."
+    )
+    print(agent.start(writing_task))
+
+    reasoning_task = (
+        "Explain quantum entanglement in simple terms and its applications "
+        "in quantum computing."
+    )
+    print(agent.start(reasoning_task))
+
+
+if __name__ == "__main__":
+    main()
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In examples/python/providers/aws/aws_bedrock_example.py lines 1 to 31, the demo
code runs immediately when the script is imported or executed, and it does not
stream responses to stdout. To fix this, wrap the demo logic inside a main
function and add a conditional if __name__ == "__main__" guard to prevent
auto-execution on import. Additionally, modify the agent.start calls to stream
the responses to stdout as they are received, ensuring outputs are visible in
real-time.
| """ | ||
| Basic example of using Fireworks AI with PraisonAI | ||
| """ | ||
|
|
||
| from praisonaiagents import Agent | ||
|
|
||
| # Initialize Agent with Fireworks AI | ||
| agent = Agent( | ||
| instructions="You are a helpful assistant", | ||
| llm="fireworks/accounts/fireworks/models/llama-v3-8b-instruct", | ||
| ) | ||
|
|
||
| # Example conversation | ||
| response = agent.start("Hello! Can you help me with a coding task?") | ||
|
|
||
| # Example with code generation | ||
| coding_task = """ | ||
| Write a Python function that implements a binary search algorithm. | ||
| Include proper documentation and error handling. | ||
| """ | ||
|
|
||
| response = agent.start(coding_task) | ||
|
|
||
| # Example with creative writing | ||
| creative_task = """ | ||
| Write a short story about a time traveler who discovers | ||
| they can only travel to moments of great historical significance. | ||
| """ | ||
|
|
||
| response = agent.start(creative_task) No newline at end of file |
🛠️ Refactor suggestion
Consistent executable pattern
Align with the other examples: wrap in main(), print responses, avoid silent overwrites of response.
```diff
-from praisonaiagents import Agent
+from praisonaiagents import Agent
+
+
+def main() -> None:
+    agent = Agent(
+        instructions="You are a helpful assistant",
+        llm="fireworks/accounts/fireworks/models/llama-v3-8b-instruct",
+    )
+
+    print(agent.start("Hello! Can you help me with a coding task?"))
+
+    coding_task = (
+        "Write a Python binary-search function with docstring and "
+        "error handling."
+    )
+    print(agent.start(coding_task))
+
+    creative_task = (
+        "Write a short story about a time traveller who can only visit "
+        "moments of great historical significance."
+    )
+    print(agent.start(creative_task))
+
+
+if __name__ == "__main__":
+    main()
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In examples/python/providers/fireworks/fireworks_example.py lines 1 to 30, the
code lacks a main() function wrapper, does not print the responses, and
overwrites the response variable silently. Refactor the script by wrapping the
example usage inside a main() function, print each response after calling
agent.start(), and avoid reusing the same response variable without printing or
handling the previous output to maintain clarity and consistency with other
examples.
Actionable comments posted: 1
🧹 Nitpick comments (1)
examples/python/models/llama/llama_business_analyst_agent.py (1)
3-9: Well-structured agent configuration for business analysis.

The agent configuration is comprehensive and domain-specific. The instructions clearly define the business analyst role and responsibilities, and the Llama model selection is appropriate for analytical tasks.
Consider adding error handling and additional configuration options for production use:
```diff
+try:
     agent = Agent(
         instructions="You are a business analyst AI agent. "
         "Help users analyze business performance, market trends, "
         "competitive landscape, and strategic opportunities. Provide insights "
         "on business metrics, financial analysis, and growth strategies.",
-        llm="meta-llama/Llama-3.1-8B-Instruct"
+        llm="meta-llama/Llama-3.1-8B-Instruct",
+        max_tokens=1000,
+        temperature=0.7
     )
+except Exception as e:
+    print(f"Error creating agent: {e}")
+    raise
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
- examples/python/models/claude/claude_creative_designer_agent.py (1 hunks)
- examples/python/models/claude/claude_educational_tutor_agent.py (1 hunks)
- examples/python/models/claude/claude_financial_advisor_agent.py (1 hunks)
- examples/python/models/claude/claude_legal_advisor_agent.py (1 hunks)
- examples/python/models/claude/claude_medical_researcher_agent.py (1 hunks)
- examples/python/models/gpt/gpt_cybersecurity_agent.py (1 hunks)
- examples/python/models/gpt/gpt_data_engineer_agent.py (1 hunks)
- examples/python/models/gpt/gpt_marketing_strategist_agent.py (1 hunks)
- examples/python/models/gpt/gpt_product_manager_agent.py (1 hunks)
- examples/python/models/gpt/gpt_software_architect_agent.py (1 hunks)
- examples/python/models/llama/llama_business_analyst_agent.py (1 hunks)
- examples/python/models/llama/llama_code_review_agent.py (1 hunks)
- examples/python/models/llama/llama_content_writer_agent.py (1 hunks)
- examples/python/models/llama/llama_data_science_agent.py (1 hunks)
- examples/python/models/llama/llama_research_assistant_agent.py (1 hunks)
✅ Files skipped from review due to trivial changes (14)
- examples/python/models/claude/claude_legal_advisor_agent.py
- examples/python/models/claude/claude_educational_tutor_agent.py
- examples/python/models/claude/claude_medical_researcher_agent.py
- examples/python/models/gpt/gpt_marketing_strategist_agent.py
- examples/python/models/llama/llama_research_assistant_agent.py
- examples/python/models/gpt/gpt_product_manager_agent.py
- examples/python/models/gpt/gpt_software_architect_agent.py
- examples/python/models/claude/claude_creative_designer_agent.py
- examples/python/models/llama/llama_content_writer_agent.py
- examples/python/models/claude/claude_financial_advisor_agent.py
- examples/python/models/gpt/gpt_data_engineer_agent.py
- examples/python/models/llama/llama_code_review_agent.py
- examples/python/models/gpt/gpt_cybersecurity_agent.py
- examples/python/models/llama/llama_data_science_agent.py
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the unified LLM wrapper in `praisonaiagents/llm/` for integrating with multiple LLM providers.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the `Agent` class from `praisonaiagents/agent/` for core agent implementations, supporting LLM integration, tool calling, and self-reflection.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/agents.ts : The 'PraisonAIAgents' class in 'src/agents/agents.ts' should manage multiple agents, tasks, memory, and process type, mirroring the Python 'agents.py'.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agent/agent.ts : The 'Agent' class in 'src/agent/agent.ts' should encapsulate a single agent's role, name, and methods for calling the LLM using 'aisdk'.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.windsurfrules:0-0
Timestamp: 2025-06-30T10:06:44.129Z
Learning: Applies to src/praisonai-ts/src/{llm,agent,agents,task}/**/*.ts : Use the 'aisdk' library for all large language model (LLM) calls in TypeScript, such as using 'generateText' for text generation.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/autoagents.ts : The 'AutoAgents' class in 'src/agents/autoagents.ts' should provide high-level convenience for automatically generating agent/task configuration from user instructions, using 'aisdk' to parse config.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use conda environment activation (`conda activate praisonai-agents`) before running development or tests.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Applies to src/praisonai-agents/tests/**/*.py : Test files should be placed in the `tests/` directory and demonstrate specific usage patterns, serving as both test and documentation.
examples/python/models/llama/llama_business_analyst_agent.py (6)
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agent/agent.ts : The 'Agent' class in 'src/agent/agent.ts' should encapsulate a single agent's role, name, and methods for calling the LLM using 'aisdk'.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the `Agent` class from `praisonaiagents/agent/` for core agent implementations, supporting LLM integration, tool calling, and self-reflection.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Use the unified LLM wrapper in `praisonaiagents/llm/` for integrating with multiple LLM providers.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/llm/llm.ts : The 'LLM' class in 'llm.ts' should wrap 'aisdk.generateText' calls for generating text responses.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/agents.ts : The 'PraisonAIAgents' class in 'src/agents/agents.ts' should manage multiple agents, tasks, memory, and process type, mirroring the Python 'agents.py'.
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/autoagents.ts : The 'AutoAgents' class in 'src/agents/autoagents.ts' should provide high-level convenience for automatically generating agent/task configuration from user instructions, using 'aisdk' to parse config.
🔇 Additional comments (1)
examples/python/models/llama/llama_business_analyst_agent.py (1)
1-1: LGTM! Clean import following framework patterns.

The import correctly uses the `Agent` class from `praisonaiagents`, consistent with the established framework usage patterns.
```python
response = agent.start("Hello! I'm your business analyst assistant. "
                       "How can I help you analyze your business data and strategies today?")
```
💡 Verification agent
🧩 Analysis chain
Verify the agent start API usage and handle the response.
The greeting message is appropriate and professional. However, ensure the start() method is the correct API and consider handling the response properly.
Verify the agent API usage and improve response handling:
Consider handling the response:
```diff
 response = agent.start("Hello! I'm your business analyst assistant. "
-                       "How can I help you analyze your business data and strategies today?")
+                       "How can I help you analyze your business data and strategies today?")
+
+if response:
+    print(response)
```

🏁 Script executed:
```shell
#!/bin/bash
# Search for Agent class implementation and start method usage
ast-grep --pattern 'class Agent {
  $$$
  def start($$$) {
    $$$
  }
  $$$
}'

# Search for other examples of agent.start() usage
rg -A 3 -B 1 "agent\.start\(" --type py
```

Length of output: 97212
🏁 Script executed:
```shell
#!/bin/bash
sed -n '1,50p' examples/python/models/llama/llama_business_analyst_agent.py
```

Length of output: 638
Add output of the agent’s response
The start() call is correct—agent.start(...) returns the assistant’s reply as a string. In this example you assign it to response but never surface it. Please update examples/python/models/llama/llama_business_analyst_agent.py to print the result:
```diff
 response = agent.start("Hello! I'm your business analyst assistant. "
                        "How can I help you analyze your business data and strategies today?")
+print(response)
```

This change will ensure the example actually displays the agent's output.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
response = agent.start("Hello! I'm your business analyst assistant. "
                       "How can I help you analyze your business data and strategies today?")
print(response)
```
🤖 Prompt for AI Agents
In examples/python/models/llama/llama_business_analyst_agent.py around lines 11
to 12, the response from agent.start(...) is assigned to the variable response
but never printed or displayed. To fix this, add a print statement immediately
after the assignment to output the content of response so the agent's reply is
visible when running the example.
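The fix can be sketched in runnable form; `FakeAgent` below is a hypothetical stand-in for the real `praisonaiagents` Agent so the pattern runs offline without an API key:

```python
# FakeAgent is an illustrative stub; the real example would use
# `from praisonaiagents import Agent` and a provider-backed model.

class FakeAgent:
    def start(self, prompt: str) -> str:
        # Echoes the prompt where a real agent would return an LLM reply.
        return f"echo: {prompt}"


agent = FakeAgent()
response = agent.start("Hello! I'm your business analyst assistant. "
                       "How can I help you analyze your business data today?")

# Surface the reply instead of silently discarding it.
if response:
    print(response)
```

The same two-line change (capture, then print) applies to every example file flagged in this review.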
✅ Updated 8 existing provider examples to use new clean format
✅ Added 15 new provider examples for complete LiteLLM coverage
✅ Added 15 new model examples with 5 unique agents per model
✅ Updated 3 existing model examples to match new structure
✅ Standardized all examples to use praisonaiagents.Agent
New Provider Examples (15): Azure, AWS Bedrock, Together AI, Replicate, Perplexity, Fireworks, vLLM, Hugging Face, Vertex AI, SageMaker, OpenRouter, Cloudflare, Moonshot AI, Databricks, AI21
New Model Examples (15 files):
Llama Model (5 agents):
llama_code_review_agent.py - Code review and quality assurance
llama_data_science_agent.py - Data analysis and ML development
llama_content_writer_agent.py - Content creation and SEO
llama_business_analyst_agent.py - Business analysis and strategy
llama_research_assistant_agent.py - Research and academic support
Claude Model (5 agents):
claude_legal_advisor_agent.py - Legal concepts and compliance
claude_medical_researcher_agent.py - Medical research and concepts
claude_financial_advisor_agent.py - Financial planning and analysis
claude_educational_tutor_agent.py - Educational support and tutoring
claude_creative_designer_agent.py - Design and creative projects
GPT Model (5 agents):
gpt_software_architect_agent.py - Software architecture and design
gpt_product_manager_agent.py - Product strategy and management
gpt_cybersecurity_agent.py - Security concepts and best practices
gpt_marketing_strategist_agent.py - Marketing strategy and campaigns
gpt_data_engineer_agent.py - Data engineering and pipelines