Add AI Market & Startup Trend Agent Notebook #755
Conversation
…up_trend_agent.ipynb
Walkthrough

Several new Jupyter notebook examples have been added, each demonstrating an AI agent for a specific domain: university admissions counseling, market/startup trend analysis, data analysis, health and fitness, meme creation, and local document QA with RAG. Each notebook introduces custom tool classes, prompt configurations, and main application logic tailored to its use case.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Notebook
    participant CustomTool(s)
    participant Agent/LLM
    User->>Notebook: Upload data / Enter query
    Notebook->>CustomTool(s): Preprocess input (e.g., file, query)
    CustomTool(s)-->>Notebook: Return processed data/results
    Notebook->>Agent/LLM: Send prompt with context/tools
    Agent/LLM-->>Notebook: Return generated response/analysis
    Notebook-->>User: Display results, visualizations, or answers
```
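The flow above can be sketched with stub components (a stdlib-only illustration; the class and method names are hypothetical, not the actual PraisonAI API):

```python
class CustomTool:
    """Stand-in for a preprocessing tool (file parsing, web search, etc.)."""
    def preprocess(self, query: str) -> str:
        return f"context for: {query}"

class AgentLLM:
    """Stand-in for the agent/LLM step."""
    def generate(self, prompt: str) -> str:
        return f"answer based on ({prompt})"

def run_notebook_flow(query: str) -> str:
    tool = CustomTool()
    agent = AgentLLM()
    context = tool.preprocess(query)           # Notebook -> CustomTool(s)
    prompt = f"{context}\nQuestion: {query}"   # Notebook -> Agent/LLM
    return agent.generate(prompt)              # result displayed to the user

print(run_notebook_flow("What are current AI startup trends?"))
```

Each notebook fills in these roles with its own tools and model, but the control flow is the same.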
Summary of Changes
Hello @Dhivya-Bharathy, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces a suite of six new AI agent notebooks, each designed to address a specific domain, from market analysis and data insights to personal health, educational counseling, creative content generation, and local document Q&A. These additions showcase various applications of AI agents, leveraging different tools and models to provide practical solutions.
Highlights
- New AI Agent Notebooks: This pull request introduces six new AI agent notebooks, significantly expanding the examples/cookbooks directory with diverse AI capabilities.
- AI Market & Startup Trend Agent: A new agent is added for analyzing current market and startup trends, leveraging real-time news, web search (DuckDuckGo), and article summarization (Newspaper3k) to provide actionable insights.
- AI Enrollment Counselor: A notebook demonstrating an AI agent designed to automate university admissions, capable of answering applicant questions, checking application completeness, and offering personalized guidance.
- AI Data Analysis Agent: An intelligent agent for comprehensive data analysis, supporting CSV/Excel file uploads, statistical analysis, automatic chart generation (Plotly), and natural language querying.
- AI Health & Fitness Agent: A personalized health and fitness agent that provides dietary and exercise recommendations, including BMI calculation, calorie needs, macronutrient breakdown, and tailored workout plans.
- AI Meme Creator Agent: An AI agent that generates memes using browser automation and multiple LLM models (OpenAI, Claude, Deepseek), featuring template search, caption generation, and quality validation.
- Local RAG Document Agent: A Retrieval-Augmented Generation (RAG) agent that operates locally, processing various document formats (PDF, TXT, MD, CSV), utilizing ChromaDB for vector storage, and Ollama for local LLM inference to answer document-based questions.
Codecov Report

All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@ Coverage Diff @@
##             main     #755   +/-   ##
=======================================
  Coverage   14.23%   14.23%
=======================================
  Files          25       25
  Lines        2571     2571
  Branches      367      367
=======================================
  Hits          366      366
  Misses       2189     2189
  Partials       16       16
```

Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
Code Review
This pull request adds a collection of 6 new example notebooks, demonstrating various AI agent capabilities. The notebooks are a great addition and showcase different use cases.
My review focuses on improving security, correctness, and maintainability. The main points are:
- Security: All notebooks include hardcoded placeholder API keys, which is a security risk. I've suggested using environment variables or getpass instead.
- Correctness: A few notebooks have bugs that prevent them from running successfully. For example, an incorrect NLTK package name, a compatibility issue with a library, and an incomplete implementation of a core feature.
- Maintainability: I've pointed out a few places where code can be refactored for better clarity and where error handling can be more specific.
```python
# Download all necessary NLTK data for newspaper3k
import nltk
nltk.download('tokenizers/punkt')
```
```python
            return f"https://i.imgflip.com/{meme_id}.jpg"
        return None
    except Exception as e:
        print(f"Error in meme generation: {str(e)}")
```
The notebook fails during meme generation with the error "ChatOpenAI" object has no field "ainvoke". This indicates an incompatibility between the browser-use library and the langchain LLM object you are passing. The notebook should be fixed to be runnable, as this error prevents its core functionality from working.
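As a hedged illustration of one possible workaround (names hypothetical; the actual fix may simply be pinning compatible versions of browser-use and langchain), a thin wrapper can expose the async `ainvoke` attribute the caller expects by delegating to the wrapped object's synchronous `invoke`:

```python
import asyncio

class AsyncInvokeShim:
    """Delegate everything to the wrapped LLM, adding an async `ainvoke`."""
    def __init__(self, llm):
        self._llm = llm

    def __getattr__(self, name):
        # fall through to the wrapped object for any other attribute
        return getattr(self._llm, name)

    async def ainvoke(self, *args, **kwargs):
        # run the synchronous invoke in a worker thread
        return await asyncio.to_thread(self._llm.invoke, *args, **kwargs)

# demo with a dummy synchronous client standing in for the real LLM object
class DummyLLM:
    def invoke(self, prompt):
        return f"echo: {prompt}"

result = asyncio.run(AsyncInvokeShim(DummyLLM()).ainvoke("hello"))
print(result)  # echo: hello
```

Whether this shim works with browser-use depends on which attributes it reads off the LLM object, so verifying the library's expected interface first is safer than patching around it.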
```python
import os
os.environ["OPENAI_API_KEY"] = "sk-..."  # <-- Replace with your actual OpenAI API key
```
Hardcoding API keys, even as placeholders, is a security risk as it might lead to users accidentally committing their real keys. It's better to load keys from environment variables or use a secure input method like getpass. This also makes the notebook more portable and secure.
```python
import os
from getpass import getpass

# For better security, load the API key from an environment variable or prompt for it.
# In Google Colab, you can use the "Secrets" tab to store your API key.
if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass("Enter your OpenAI API key: ")
```
```python
# Setup Key
import os

# Set your Anthropic API key
anthropic_key = "your_anthropic_key_here"  # Get from https://console.anthropic.com

# Set environment variable
os.environ["ANTHROPIC_API_KEY"] = anthropic_key

print("✅ Anthropic API key configured!")
```
Hardcoding API keys, even as placeholders, is a security risk as it might lead to users accidentally committing their real keys. It's better to load keys from environment variables or use a secure input method like getpass. This also makes the notebook more portable and secure.
```python
# Setup Key
import os
from getpass import getpass

# For better security, load the API key from an environment variable or prompt for it.
# In Google Colab, you can use the "Secrets" tab to store your API key.
if "ANTHROPIC_API_KEY" not in os.environ:
    os.environ["ANTHROPIC_API_KEY"] = getpass("Enter your Anthropic API key: ")

print("✅ Anthropic API key configured!")
```
```python
import os
openai_key = "sk-.."

os.environ["OPENAI_API_KEY"] = openai_key
```
Hardcoding API keys, even as placeholders, is a security risk. It's better to load keys from environment variables or use a secure input method like getpass.
```python
import os
from getpass import getpass

openai_key = os.getenv("OPENAI_API_KEY")
if not openai_key:
    openai_key = getpass("Enter your OpenAI API key: ")
os.environ["OPENAI_API_KEY"] = openai_key
```
```python
import os

# Set your API keys
openai_key = "Enter you api key here"
anthropic_key = "Enter you api key here"  # Get from https://console.anthropic.com
deepseek_key = "Enter you api key here"   # Get from https://platform.deepseek.com

# Set environment variables
os.environ["OPENAI_API_KEY"] = openai_key
os.environ["ANTHROPIC_API_KEY"] = anthropic_key
os.environ["DEEPSEEK_API_KEY"] = deepseek_key

# Model selection
model_choice = "OpenAI"  # Options: "OpenAI", "Claude", "Deepseek"

print("✅ API keys configured!")
print(f"✅ Using model: {model_choice}")
```
Hardcoding API keys, even as placeholders, is a security risk. It's better to load keys from environment variables or use a secure input method like getpass for each key.
```python
import os
from getpass import getpass

# For better security, load API keys from environment variables or prompt for them.
# In Google Colab, you can use the "Secrets" tab to store your API keys.
if os.getenv("OPENAI_API_KEY") is None:
    os.environ["OPENAI_API_KEY"] = getpass("Enter your OpenAI API key: ")
if os.getenv("ANTHROPIC_API_KEY") is None:
    os.environ["ANTHROPIC_API_KEY"] = getpass("Enter your Anthropic API key: ")
if os.getenv("DEEPSEEK_API_KEY") is None:
    os.environ["DEEPSEEK_API_KEY"] = getpass("Enter your Deepseek API key: ")

# Model selection
model_choice = "OpenAI"  # Options: "OpenAI", "Claude", "Deepseek"

print("✅ API keys configured!")
print(f"✅ Using model: {model_choice}")
```
```python
import os
openai_key = "sk-.."

os.environ["OPENAI_API_KEY"] = openai_key
```
Hardcoding API keys, even as placeholders, is a security risk. It's better to load keys from environment variables or use a secure input method like getpass.
```python
import os
from getpass import getpass

openai_key = os.getenv("OPENAI_API_KEY")
if not openai_key:
    openai_key = getpass("Enter your OpenAI API key: ")
os.environ["OPENAI_API_KEY"] = openai_key
```
```python
        # Here you would integrate with local LLM for answer generation
        print(f"\n💡 AI Answer (using local LLM):")
        print("Based on the document content, here's what I found...")
        print("(This would be generated by the local LLM model)")
```
The notebook is missing the actual integration with a local LLM for generating answers. The code currently just prints a placeholder message. To fulfill the notebook's purpose of being a RAG agent, this part should be implemented.
For example, you could use the ollama library to call a local model with the retrieved context:
```python
# Example using ollama
import ollama

context = "\n".join([res['documents'][0][i] for i in range(search_result['num_results'])])
llm_prompt = f"Based on the following context, answer the question.\nContext: {context}\nQuestion: {question}"
response = ollama.chat(model='llama3', messages=[{'role': 'user', 'content': llm_prompt}])
print(f"💡 AI Answer: {response['message']['content']}")
```

```python
                    "summary": a.summary,
                    "url": article["url"]
                })
        except Exception as e:
```
Catching a broad Exception can hide unrelated errors and make debugging harder. It's better to catch more specific exceptions that you expect from the newspaper3k library, like ArticleException. You would need to import it with from newspaper import ArticleException.
```python
        except Exception as e:  # Consider a more specific exception, e.g., from newspaper import ArticleException
```
```python
        if chart_type == 'bar':
            fig = px.bar(df, x=x_column, y=y_column, title=title, color_discrete_sequence=['#1f77b4'])
        elif chart_type == 'line':
            fig = px.line(df, x=x_column, y=y_column, title=title, color_discrete_sequence=['#2ca02c'])
        elif chart_type == 'scatter':
            fig = px.scatter(df, x=x_column, y=y_column, title=title, color_discrete_sequence=['#ff7f0e'])
        elif chart_type == 'histogram':
            fig = px.histogram(df, x=x_column, title=title, color_discrete_sequence=['#d62728'])
        elif chart_type == 'box':
            fig = px.box(df, x=x_column, y=y_column, title=title, color_discrete_sequence=['#9467bd'])
        elif chart_type == 'pie':
            fig = px.pie(df, values=y_column, names=x_column, title=title)
        elif chart_type == 'heatmap':
            corr_matrix = df.corr()
            fig = px.imshow(corr_matrix, title=title, color_continuous_scale='RdBu')
        elif chart_type == 'area':
            fig = px.area(df, x=x_column, y=y_column, title=title, color_discrete_sequence=['#8c564b'])
        else:
            return "Unsupported chart type"
```
This long if/elif/else chain can be refactored into a dictionary lookup to make it more concise and easier to extend with new chart types. This improves maintainability.
For example:

```python
chart_map = {
    'bar': (px.bar, {'color_discrete_sequence': ['#1f77b4']}),
    'line': (px.line, {'color_discrete_sequence': ['#2ca02c']}),
    # ... and so on
}
if chart_type in chart_map:
    plot_func, kwargs = chart_map[chart_type]
    # ... handle args and call plot_func
else:
    return "Unsupported chart type"
```
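To make the dictionary-dispatch refactor concrete, here is a runnable stdlib-only sketch with stand-in plot functions in place of plotly.express (with Plotly, the map values would be `px.bar`, `px.line`, and so on):

```python
def _plot(kind):
    # stand-in for a plotly.express function such as px.bar
    def plot(df, **kwargs):
        return f"{kind} chart with {sorted(kwargs)}"
    return plot

CHART_MAP = {
    "bar":  (_plot("bar"),  {"color_discrete_sequence": ["#1f77b4"]}),
    "line": (_plot("line"), {"color_discrete_sequence": ["#2ca02c"]}),
    "pie":  (_plot("pie"),  {}),
}

def create_chart(df, chart_type, x_column, y_column, title):
    if chart_type not in CHART_MAP:
        return "Unsupported chart type"
    plot_func, extra = CHART_MAP[chart_type]
    return plot_func(df, x=x_column, y=y_column, title=title, **extra)

print(create_chart(None, "bar", "month", "sales", "Monthly Sales"))
print(create_chart(None, "radar", "a", "b", "t"))  # Unsupported chart type
```

Adding a new chart type then means adding one map entry rather than another `elif` branch; chart types that take different argument shapes (pie, heatmap) may still need a small per-type adapter.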
Actionable comments posted: 17
🔭 Outside diff range comments (1)
examples/cookbooks/AI_Enrollment_Counselor.ipynb (1)
445-1050: Fix critical merge conflict in the notebook structure.

The file contains duplicate/conflicting content starting at line 445, which appears to be a Git merge conflict that wasn't properly resolved. This makes the notebook invalid JSON and unusable.

The duplicate content from lines 445-1050 should be completely removed. The correct notebook structure should end at line 444 with the closing brace and newline. Here's what should be done:

```diff
-  },
-  "nbformat": 4,
-  "nbformat_minor": 0
-}
-=======
-
-  "nbformat": 4,
-  "nbformat_minor": 0,
-  "metadata": {
-  [... all the duplicate content ...]
-  }
-}
```

Remove all content from line 445 onwards, as the notebook should properly end at line 444.
🧹 Nitpick comments (9)
examples/cookbooks/ai_market_startup_trend_agent.ipynb (2)
49-49: Consider updating the deprecated newspaper3k package

The newspaper3k package hasn't been maintained since 2020. Consider using newspaper4k, which is an actively maintained fork with better Python 3 support and bug fixes.

```diff
-!pip install praisonai streamlit duckduckgo-search "newspaper3k[lxml]" anthropic lxml_html_clean
+!pip install praisonai streamlit duckduckgo-search "newspaper4k[lxml]" anthropic
```
156-164: Implement actual trend extraction using LLM

The TrendInsightTool currently just concatenates summaries without performing actual trend analysis. The TODO comment indicates this is incomplete.

Would you like me to help implement proper trend extraction using the Anthropic API to analyze the summaries and identify patterns, opportunities, and risks?
examples/cookbooks/local_rag_document_qa_agent.ipynb (2)
209-213: Make ChromaDB path configurable and handle cleanup

The ChromaDB path is hardcoded, which may cause permission issues in some environments.

```diff
 class VectorDatabaseTool:
-    def __init__(self, collection_name: str = "document_qa"):
+    def __init__(self, collection_name: str = "document_qa", db_path: str = None):
         self.collection_name = collection_name
+        if db_path is None:
+            db_path = os.path.join(tempfile.gettempdir(), "chroma_db")
         # Use new ChromaDB client configuration
-        self.client = chromadb.PersistentClient(path="./chroma_db")
+        self.client = chromadb.PersistentClient(path=db_path)
         self.collection = self.client.get_or_create_collection(name=collection_name)
```
842-845: Complete the LLM integration for answer generation

The Q&A loop retrieves relevant chunks but doesn't generate actual answers using the local LLM.

Would you like me to help implement the integration with Ollama to generate contextual answers based on the retrieved chunks? This would complete the RAG pipeline.
examples/cookbooks/ai_meme_creator_agent.ipynb (1)
226-247: Add query validation for caption generation

The caption generation directly uses the query in formatted strings without length validation, which could create overly long captions.

```diff
 def _generate_setup_punchline_captions(self, query: str) -> List[Dict[str, str]]:
     """Generate setup-punchline style captions"""
     captions = []
+    # Truncate query if too long for meme text
+    if len(query) > 100:
+        query = query[:97] + "..."
+
     # Extract key elements from query
     words = query.split()
```

examples/cookbooks/ai_data_analysis_agent.ipynb (2)
176-178: Improve date column detection logic

The current date detection only checks if 'date' is in the column name (case-insensitive). Consider using pandas' built-in date detection or checking for common date patterns.

```diff
 for col in df.columns:
     if 'date' in col.lower():
         df[col] = pd.to_datetime(df[col], errors='coerce')
+    # Also try to infer datetime for object columns
+    elif df[col].dtype == 'object' and not pd.api.types.is_numeric_dtype(df[col]):
+        try:
+            # Check if the column contains date-like strings
+            sample = df[col].dropna().head(10)
+            if sample.astype(str).str.match(r'\d{4}-\d{2}-\d{2}|\d{2}/\d{2}/\d{4}').any():
+                df[col] = pd.to_datetime(df[col], errors='coerce')
+        except Exception:
+            pass
```
834-1014: Well-structured main application with comprehensive features!

The implementation provides a complete data analysis workflow with file upload, preprocessing, analysis, and visualization. Consider adding a try-except wrapper around the entire main section to handle unexpected errors gracefully.

You might want to wrap the main execution in a try-except block:

```python
try:
    # Main Application code...
except Exception as e:
    print(f"❌ An unexpected error occurred: {str(e)}")
    print("Please check your input data and try again.")
```
1-1021: Consider extracting common patterns into shared utilities

Both notebooks share similar patterns (tool classes, YAML prompts, error handling). Consider creating a shared module for common functionality to improve maintainability and reduce code duplication across the cookbook examples.
For example:
- Base tool class with common error handling
- Shared input validation utilities
- Common visualization helpers
- Standardized YAML prompt structure
examples/cookbooks/AI_Enrollment_Counselor.ipynb (1)
400-417: Good implementation with potential for enhancement.

The ask_enrollment_agent helper function is well-designed and handles both document validation and general queries effectively.

Consider adding input validation and error handling:

```diff
 def ask_enrollment_agent(query, submitted=None, required=None):
+    if not query or not isinstance(query, str):
+        raise ValueError("Query must be a non-empty string")
     if submitted and required:
+        if not isinstance(submitted, list) or not isinstance(required, list):
+            raise TypeError("Document lists must be of type list")
         prompt = (
             f"Applicant submitted documents: {submitted}\n"
             f"Required documents: {required}\n"
             f"{query}\n"
             "List any missing documents and provide guidance."
         )
-        return enrollment_agent.start(prompt)
+        try:
+            return enrollment_agent.start(prompt)
+        except Exception as e:
+            return f"Error processing request: {str(e)}"
     else:
-        return enrollment_agent.start(query)
+        try:
+            return enrollment_agent.start(query)
+        except Exception as e:
+            return f"Error processing request: {str(e)}"
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- examples/cookbooks/AI_Enrollment_Counselor.ipynb (2 hunks)
- examples/cookbooks/Ai_Market_Startup_Trend_Agent.ipynb (1 hunks)
- examples/cookbooks/ai_data_analysis_agent.ipynb (1 hunks)
- examples/cookbooks/ai_health_fitness_agent.ipynb (1 hunks)
- examples/cookbooks/ai_market_startup_trend_agent.ipynb (1 hunks)
- examples/cookbooks/ai_meme_creator_agent.ipynb (1 hunks)
- examples/cookbooks/local_rag_document_qa_agent.ipynb (1 hunks)
🔇 Additional comments (4)
examples/cookbooks/local_rag_document_qa_agent.ipynb (1)
275-301: Review text chunking edge cases

The chunking logic looks correct, but verify that all sentences are included, especially when a sentence exactly fits the remaining chunk size.

The implementation handles overlapping chunks well and includes the final chunk. Good error handling with try-except.
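For reference, the overlap pattern under review can be sketched as follows (a minimal illustration with assumed sizes, not the notebook's exact implementation):

```python
def chunk_text(text: str, chunk_size: int = 50, overlap: int = 10):
    """Split text into fixed-size chunks where consecutive chunks share
    `overlap` characters, so the final chunk always reaches the end."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break  # final chunk already covers the tail of the text
    return chunks

parts = chunk_text("x" * 100, chunk_size=50, overlap=10)
print([len(p) for p in parts])  # [50, 50, 20]
```

The edge case to test is a sentence or chunk that ends exactly at `len(text)`: the explicit break after appending guarantees the tail is emitted exactly once rather than dropped or duplicated.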
examples/cookbooks/ai_meme_creator_agent.ipynb (1)
726-728: Handle asyncio properly in Google Colab environment

The code uses await directly, which may not work in all Colab environments.

```python
# Check if we're in an async context and handle accordingly
import nest_asyncio
nest_asyncio.apply()

# Then run the async function
import asyncio
meme_url = asyncio.run(generate_meme(query, model_choice, api_key))
```

Note: You may need to install nest_asyncio for proper async handling in Jupyter/Colab.

examples/cookbooks/ai_health_fitness_agent.ipynb (1)
117-224: Excellent BMI calculator implementation!

The tool provides comprehensive BMI analysis with accurate health categories, personalized recommendations, and health risk assessment. The error handling and structured response format are well-designed.
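As a point of comparison, the core BMI computation with the standard WHO adult categories can be sketched as (a minimal illustration, not the notebook's full tool):

```python
def bmi_report(weight_kg: float, height_m: float) -> dict:
    """Return BMI rounded to one decimal plus a WHO-style adult category."""
    if weight_kg <= 0 or height_m <= 0:
        raise ValueError("weight and height must be positive")
    bmi = weight_kg / (height_m ** 2)
    if bmi < 18.5:
        category = "Underweight"
    elif bmi < 25:
        category = "Normal weight"
    elif bmi < 30:
        category = "Overweight"
    else:
        category = "Obese"
    return {"bmi": round(bmi, 1), "category": category}

print(bmi_report(70, 1.75))  # {'bmi': 22.9, 'category': 'Normal weight'}
```

The notebook layers recommendations and risk assessment on top of this calculation.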
examples/cookbooks/AI_Enrollment_Counselor.ipynb (1)
108-118: LGTM! Clear and well-structured agent configuration.

The role, goal, and instructions are well-defined and appropriately scoped for an enrollment counselor agent.
```python
        print("\n📰 Article Summaries:")
        for i, s in enumerate(summaries, 1):
            print(f"{i}. {s['title']}")
            print(f"   {s['summary']}")
            print(f"   [Read more]({s['url']})\n")
```
🛠️ Refactor suggestion
Add validation for empty summaries
The code displays summaries without checking if they exist or have content, leading to empty output.
```diff
 print("\n📰 Article Summaries:")
 for i, s in enumerate(summaries, 1):
     print(f"{i}. {s['title']}")
-    print(f"   {s['summary']}")
+    summary_text = s.get('summary', '').strip()
+    if summary_text and not summary_text.startswith('Error'):
+        print(f"   {summary_text}")
+    else:
+        print(f"   [Summary not available]")
     print(f"   [Read more]({s['url']})\n")
```
print(f" [Read more]({s['url']})\n")📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| " print(\"\\n📰 Article Summaries:\")\n", | |
| " for i, s in enumerate(summaries, 1):\n", | |
| " print(f\"{i}. {s['title']}\")\n", | |
| " print(f\" {s['summary']}\")\n", | |
| " print(f\" [Read more]({s['url']})\\n\")\n", | |
| print("\n📰 Article Summaries:") | |
| for i, s in enumerate(summaries, 1): | |
| print(f"{i}. {s['title']}") | |
| summary_text = s.get('summary', '').strip() | |
| if summary_text and not summary_text.startswith('Error'): | |
| print(f" {summary_text}") | |
| else: | |
| print(f" [Summary not available]") | |
| print(f" [Read more]({s['url']})\n") |
```yaml
temperature: 0.4
max_tokens: 3500
model: "claude-3-5-sonnet"
```
💡 Verification agent
🧩 Analysis chain
Verify the Claude model name format
The model name "claude-3-5-sonnet" may not be in the correct format for the Anthropic API.
🌐 Web query:
What is the correct model name format for Claude 3.5 Sonnet in the Anthropic API?
💡 Result:
In the Anthropic API, the model name for Claude 3.5 Sonnet is claude-3-5-sonnet-20240620. This identifier specifies the model's version and release date, ensuring consistent behavior across different environments. (docs.anthropic.com)
For convenience during development and testing, Anthropic offers an alias claude-3-5-sonnet-latest, which automatically points to the most recent snapshot of the Claude 3.5 Sonnet model. While aliases are useful for experimentation, it's recommended to use specific model versions (e.g., claude-3-5-sonnet-20240620) in production applications to ensure consistent performance. (docs.anthropic.com)
Additionally, when using the Anthropic API, it's important to include the model version in the model name to guarantee consistent behavior. For example, to use Claude 3.5 Sonnet in the API, you would specify the model name as claude-3-5-sonnet@20240620. (cloud.google.com)
Citations:
- 1: https://docs.anthropic.com/en/docs/about-claude/models?utm_source=openai
- 2: https://docs.anthropic.com/en/docs/about-claude/models?utm_source=openai
- 3: https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/use-claude?utm_source=openai
Specify the full Claude 3.5 Sonnet model identifier
The current line hard-codes claude-3-5-sonnet, which will cause API errors. You need to include the version suffix or use the alias. In your notebook:
• File: examples/cookbooks/ai_market_startup_trend_agent.ipynb
Line 226: replace
model: "claude-3-5-sonnet"
with either the fixed version
model: "claude-3-5-sonnet-20240620"
(recommended for production) or the development alias
model: "claude-3-5-sonnet-latest"
Using a specific version ensures consistent behavior across environments.
🤖 Prompt for AI Agents
In examples/cookbooks/ai_market_startup_trend_agent.ipynb at line 226, the model
identifier is hard-coded as "claude-3-5-sonnet", which will cause API errors.
Update this line to specify the full model identifier by replacing it with
either "claude-3-5-sonnet-20240620" for a fixed production version or
"claude-3-5-sonnet-latest" for the development alias to ensure consistent
behavior.
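A lightweight way to enforce this in the notebook is a small guard that rejects un-versioned model ids before any API call is made. This is a sketch, not part of the PR; the regex is an assumption that only needs to cover the Claude naming scheme discussed above:

```python
import re

# Accept either a dated snapshot ("claude-3-5-sonnet-20240620") or the
# "-latest" alias; reject the bare, un-versioned family name.
VERSIONED = re.compile(r"^claude-[\w.-]+-(\d{8}|latest)$")

def check_model_id(model: str) -> str:
    """Return the model id unchanged, or raise if it lacks a version suffix."""
    if not VERSIONED.match(model):
        raise ValueError(
            f"Model id '{model}' has no version suffix; use e.g. "
            "'claude-3-5-sonnet-20240620' or 'claude-3-5-sonnet-latest'."
        )
    return model
```

Calling this once where the YAML is assembled surfaces the mistake at configuration time instead of as an opaque API error.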
| "nltk.download('tokenizers/punkt')\n", | ||
| "nltk.download('averaged_perceptron_tagger', quiet=True)\n", | ||
| "nltk.download('maxent_ne_chunker', quiet=True)\n", | ||
| "nltk.download('words', quiet=True)\n", |
Fix NLTK data download paths
The NLTK download path 'tokenizers/punkt' is incorrect, causing download errors. Use the correct package names.
-nltk.download('tokenizers/punkt')
+nltk.download('punkt')
nltk.download('averaged_perceptron_tagger', quiet=True)
nltk.download('maxent_ne_chunker', quiet=True)
nltk.download('words', quiet=True)
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In examples/cookbooks/ai_market_startup_trend_agent.ipynb around lines 333 to
336, the NLTK download path 'tokenizers/punkt' is incorrect and causes errors.
Replace 'tokenizers/punkt' with the correct package name 'punkt' in the
nltk.download call to fix the download issue.
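The underlying confusion is that `nltk.data.find()` takes category-prefixed resource paths (`'tokenizers/punkt'`) while `nltk.download()` takes bare package ids (`'punkt'`). A small helper makes the mapping explicit; this sketch uses only the stdlib, so it does not touch the network:

```python
def to_download_id(resource: str) -> str:
    """Map an nltk.data resource path ('tokenizers/punkt') to the
    downloader package id nltk.download expects ('punkt')."""
    return resource.rsplit("/", 1)[-1]

# Resources newspaper3k's .nlp() relies on, written as data paths:
needed = [
    "tokenizers/punkt",
    "taggers/averaged_perceptron_tagger",
    "chunkers/maxent_ne_chunker",
    "corpora/words",
]
package_ids = [to_download_id(r) for r in needed]
# In the notebook: for pkg in package_ids: nltk.download(pkg, quiet=True)
```

The helper is also a no-op on ids that are already bare, so it is safe to apply uniformly.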
| "anthropic_key = \"your_anthropic_key_here\" # Get from https://console.anthropic.com\n", | ||
| "\n", | ||
| "# Set environment variable\n", | ||
| "os.environ[\"ANTHROPIC_API_KEY\"] = anthropic_key\n", | ||
| "\n", | ||
| "print(\"✅ Anthropic API key configured!\")" |
Add API key validation to prevent runtime errors
The code sets a placeholder API key without validation. This will cause runtime errors when making API calls.
# Set your Anthropic API key
anthropic_key = "your_anthropic_key_here" # Get from https://console.anthropic.com
+# Validate API key
+if anthropic_key == "your_anthropic_key_here" or not anthropic_key:
+ raise ValueError("Please set a valid Anthropic API key")
+
# Set environment variable
os.environ["ANTHROPIC_API_KEY"] = anthropic_key
🤖 Prompt for AI Agents
In examples/cookbooks/ai_market_startup_trend_agent.ipynb around lines 85 to 90,
the code sets a placeholder Anthropic API key without validating it, which can
lead to runtime errors during API calls. Add a validation step after setting the
environment variable to check if the API key is not the placeholder or empty,
and raise an informative error or prompt the user to provide a valid key before
proceeding.
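A minimal version of the suggested guard, written as a reusable function. The placeholder strings are assumptions matching this notebook; extend the set if other templates are in use:

```python
from typing import Optional

# Placeholder values this notebook is known to ship with (assumption).
PLACEHOLDERS = {"", "your_anthropic_key_here"}

def validate_api_key(key: Optional[str]) -> str:
    """Fail fast with a clear message instead of a confusing 401 later."""
    if key is None or key.strip() in PLACEHOLDERS:
        raise ValueError(
            "Please set a valid Anthropic API key "
            "(see https://console.anthropic.com) before running the notebook."
        )
    return key

# Usage in the notebook would be:
# os.environ["ANTHROPIC_API_KEY"] = validate_api_key(anthropic_key)
```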
| { | ||
| "cells": [ | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": { | ||
| "id": "Fi_y0ooAzjjy" | ||
| }, | ||
| "source": [ | ||
| "# AI Market & Startup Trend Agent" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": { | ||
| "id": "M18iBcPYzl9d" | ||
| }, | ||
| "source": [ | ||
| "* An AI-powered agent that analyzes current market and startup trends using real-time news, web search, and multi-agent collaboration.\n", | ||
| "* The agent collects recent articles, summarizes key insights, and identifies emerging opportunities for entrepreneurs and investors.\n", | ||
| "* Features include automated news gathering, trend summarization, and actionable reports on startup opportunities in any area of interest." | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": { | ||
| "id": "SfsvzoD_3JtE" | ||
| }, | ||
| "source": [ | ||
| "[](https://colab.research.google.com/github/Dhivya-Bharathy/PraisonAI/blob/main/examples/cookbooks/ai_market_startup_trend_agent.ipynb)\n" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": { | ||
| "id": "6par7OlW0KVF" | ||
| }, | ||
| "source": [ | ||
| "# Dependencies" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "code", | ||
| "execution_count": null, | ||
| "metadata": { | ||
| "id": "oJA89ujM0OSd" | ||
| }, | ||
| "outputs": [], | ||
| "source": [ | ||
| "!pip install praisonai streamlit duckduckgo-search \"newspaper3k[lxml]\" anthropic lxml_html_clean" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": { | ||
| "id": "bK4r7sQ_0hz0" | ||
| }, | ||
| "source": [ | ||
| "# Setup Key" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "code", | ||
| "execution_count": 5, | ||
| "metadata": { | ||
| "colab": { | ||
| "base_uri": "https://localhost:8080/" | ||
| }, | ||
| "id": "9UViB-oJ0izp", | ||
| "outputId": "f1db522f-5e30-4f14-8fc1-5c5b814a10c7" | ||
| }, | ||
| "outputs": [ | ||
| { | ||
| "name": "stdout", | ||
| "output_type": "stream", | ||
| "text": [ | ||
| "✅ Anthropic API key configured!\n" | ||
| ] | ||
| } | ||
| ], | ||
| "source": [ | ||
| "# Setup Key\n", | ||
| "import os\n", | ||
| "\n", | ||
| "# Set your Anthropic API key\n", | ||
| "anthropic_key = \"your_anthropic_key_here\" # Get from https://console.anthropic.com\n", | ||
| "\n", | ||
| "# Set environment variable\n", | ||
| "os.environ[\"ANTHROPIC_API_KEY\"] = anthropic_key\n", | ||
| "\n", | ||
| "print(\"✅ Anthropic API key configured!\")" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": { | ||
| "id": "TX6AKmiE0sK3" | ||
| }, | ||
| "source": [ | ||
| "# Tools" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "code", | ||
| "execution_count": 6, | ||
| "metadata": { | ||
| "id": "afU4kjcz0tCk" | ||
| }, | ||
| "outputs": [], | ||
| "source": [ | ||
| "# Custom News & Trend Analysis Tools\n", | ||
| "\n", | ||
| "from duckduckgo_search import DDGS\n", | ||
| "from newspaper import Article\n", | ||
| "from typing import List, Dict, Any\n", | ||
| "\n", | ||
| "class NewsSearchTool:\n", | ||
| " def __init__(self, max_results: int = 5):\n", | ||
| " self.max_results = max_results\n", | ||
| "\n", | ||
| " def search_news(self, topic: str) -> List[Dict[str, Any]]:\n", | ||
| " \"\"\"Search for recent news articles on a topic using DuckDuckGo.\"\"\"\n", | ||
| " results = []\n", | ||
| " with DDGS() as ddgs:\n", | ||
| " for r in ddgs.news(topic, max_results=self.max_results):\n", | ||
| " results.append({\n", | ||
| " \"title\": r.get(\"title\"),\n", | ||
| " \"url\": r.get(\"url\"),\n", | ||
| " \"date\": r.get(\"date\"),\n", | ||
| " \"body\": r.get(\"body\")\n", | ||
| " })\n", | ||
| " return results\n", | ||
| "\n", | ||
| "class ArticleSummaryTool:\n", | ||
| " def summarize_articles(self, articles: List[Dict[str, Any]]) -> List[Dict[str, Any]]:\n", | ||
| " \"\"\"Summarize the content of news articles using Newspaper3k.\"\"\"\n", | ||
| " summaries = []\n", | ||
| " for article in articles:\n", | ||
| " try:\n", | ||
| " a = Article(article[\"url\"])\n", | ||
| " a.download()\n", | ||
| " a.parse()\n", | ||
| " a.nlp()\n", | ||
| " summaries.append({\n", | ||
| " \"title\": article[\"title\"],\n", | ||
| " \"summary\": a.summary,\n", | ||
| " \"url\": article[\"url\"]\n", | ||
| " })\n", | ||
| " except Exception as e:\n", | ||
| " summaries.append({\n", | ||
| " \"title\": article[\"title\"],\n", | ||
| " \"summary\": f\"Error summarizing article: {str(e)}\",\n", | ||
| " \"url\": article[\"url\"]\n", | ||
| " })\n", | ||
| " return summaries\n", | ||
| "\n", | ||
| "class TrendInsightTool:\n", | ||
| " def extract_trends(self, summaries: List[Dict[str, Any]]) -> Dict[str, Any]:\n", | ||
| " \"\"\"Extract and aggregate trend insights from article summaries.\"\"\"\n", | ||
| " all_text = \" \".join([s[\"summary\"] for s in summaries if \"summary\" in s])\n", | ||
| " # For demo: just return the combined summaries\n", | ||
| " # In production: use LLM to extract trends and opportunities\n", | ||
| " return {\n", | ||
| " \"trend_report\": all_text[:2000] + (\"...\" if len(all_text) > 2000 else \"\")\n", | ||
| " }" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": { | ||
| "id": "CVCoaGTO0-Ue" | ||
| }, | ||
| "source": [ | ||
| "# YAML Prompt" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "code", | ||
| "execution_count": 7, | ||
| "metadata": { | ||
| "colab": { | ||
| "base_uri": "https://localhost:8080/" | ||
| }, | ||
| "id": "6xu0NeHy0_ZV", | ||
| "outputId": "2d85f20a-8af4-49a4-ec76-9db4102e0d51" | ||
| }, | ||
| "outputs": [ | ||
| { | ||
| "name": "stdout", | ||
| "output_type": "stream", | ||
| "text": [ | ||
| "✅ YAML Prompt configured!\n" | ||
| ] | ||
| } | ||
| ], | ||
| "source": [ | ||
| "# YAML Prompt\n", | ||
| "yaml_prompt = \"\"\"\n", | ||
| "name: \"AI Market & Startup Trend Agent\"\n", | ||
| "description: \"Expert market analyst that gathers, summarizes, and analyzes startup and market trends from real-time news sources\"\n", | ||
| "instructions:\n", | ||
| " - \"You are an expert market and startup trend analyst\"\n", | ||
| " - \"Search for the latest news and articles on the user's topic of interest\"\n", | ||
| " - \"Summarize the key points and insights from each article\"\n", | ||
| " - \"Aggregate the summaries to identify emerging trends and startup opportunities\"\n", | ||
| " - \"Present findings in a clear, actionable report for entrepreneurs and investors\"\n", | ||
| " - \"Cite sources and provide links to original articles\"\n", | ||
| " - \"Highlight any patterns, risks, or opportunities you discover\"\n", | ||
| " - \"Use bullet points and markdown formatting for clarity\"\n", | ||
| "\n", | ||
| "tools:\n", | ||
| " - name: \"NewsSearchTool\"\n", | ||
| " description: \"Searches for recent news articles on a given topic using DuckDuckGo\"\n", | ||
| " - name: \"ArticleSummaryTool\"\n", | ||
| " description: \"Summarizes the content of news articles using Newspaper3k\"\n", | ||
| " - name: \"TrendInsightTool\"\n", | ||
| " description: \"Extracts and aggregates trend insights from article summaries\"\n", | ||
| "\n", | ||
| "output_format:\n", | ||
| " - \"Provide a trend analysis report with actionable insights\"\n", | ||
| " - \"Include a list of summarized articles with links\"\n", | ||
| " - \"Highlight key opportunities and risks\"\n", | ||
| " - \"Use clear, structured formatting with sections for news, summaries, and trends\"\n", | ||
| "\n", | ||
| "temperature: 0.4\n", | ||
| "max_tokens: 3500\n", | ||
| "model: \"claude-3-5-sonnet\"\n", | ||
| "\"\"\"\n", | ||
| "\n", | ||
| "print(\"✅ YAML Prompt configured!\")" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "markdown", | ||
| "metadata": { | ||
| "id": "MaX_eyRB1jtF" | ||
| }, | ||
| "source": [ | ||
| "# Main" | ||
| ] | ||
| }, | ||
| { | ||
| "cell_type": "code", | ||
| "execution_count": 13, | ||
| "metadata": { | ||
| "colab": { | ||
| "base_uri": "https://localhost:8080/" | ||
| }, | ||
| "collapsed": true, | ||
| "id": "fmMkG5hR1lS7", | ||
| "outputId": "0833615c-c919-406e-9646-9cd8d74f78ed" | ||
| }, | ||
| "outputs": [ | ||
| { | ||
| "name": "stderr", | ||
| "output_type": "stream", | ||
| "text": [ | ||
| "[nltk_data] Error loading tokenizers/punkt: Package 'tokenizers/punkt'\n", | ||
| "[nltk_data] not found in index\n" | ||
| ] | ||
| }, | ||
| { | ||
| "name": "stdout", | ||
| "output_type": "stream", | ||
| "text": [ | ||
| "📈 AI Market & Startup Trend Agent\n", | ||
| "Analyze current market and startup trends using real-time news and AI summarization!\n", | ||
| "\n", | ||
| "Enter the area of interest for your Startup or Market Trend Analysis: Chennai\n", | ||
| "\n", | ||
| "🔍 Searching for recent news on: Chennai\n" | ||
| ] | ||
| }, | ||
| { | ||
| "name": "stderr", | ||
| "output_type": "stream", | ||
| "text": [ | ||
| "/tmp/ipython-input-6-1293765785.py:14: RuntimeWarning: This package (`duckduckgo_search`) has been renamed to `ddgs`! Use `pip install ddgs` instead.\n", | ||
| " with DDGS() as ddgs:\n" | ||
| ] | ||
| }, | ||
| { | ||
| "name": "stdout", | ||
| "output_type": "stream", | ||
| "text": [ | ||
| "✅ Found 5 articles. Summarizing...\n", | ||
| "\n", | ||
| "📰 Article Summaries:\n", | ||
| "1. Chennai woman breaks down in front of traffic cop, shares the reason in viral post\n", | ||
| " \n", | ||
| " [Read more](https://www.msn.com/en-in/news/other/chennai-woman-breaks-down-in-front-of-traffic-cop-shares-the-reason-in-viral-post/ar-AA1IalYh)\n", | ||
| "\n", | ||
| "2. Chennai founder breaks down in front of traffic police after his unexpected question: 'And that's when the tears came'\n", | ||
| " \n", | ||
| " [Read more](https://www.msn.com/en-in/news/India/chennai-founder-breaks-down-in-front-of-traffic-police-after-his-unexpected-question-and-that-s-when-the-tears-came/ar-AA1Ia579)\n", | ||
| "\n", | ||
| "3. Explore India's coastal flavours at this Chennai food fest\n", | ||
| " \n", | ||
| " [Read more](https://www.msn.com/en-in/foodanddrink/other/explore-indias-coastal-flavours-at-this-chennai-food-fest/ar-AA1Iax3e)\n", | ||
| "\n", | ||
| "4. Chennai weather update: Expect patchy rain and a warm summer day\n", | ||
| " \n", | ||
| " [Read more](https://www.msn.com/en-in/news/india/chennai-weather-update-expect-patchy-rain-and-a-warm-summer-day/ar-AA1I9ECV)\n", | ||
| "\n", | ||
| "5. Chennai Power Outage Alert: Check Areas That Will Face Supply Disruption On July 8 For Maintenance Work\n", | ||
| " \n", | ||
| " [Read more](https://www.msn.com/en-in/autos/photos/chennai-power-outage-alert-check-areas-that-will-face-supply-disruption-on-july-8-for-maintenance-work/ar-AA1I9KIg)\n", | ||
| "\n", | ||
| "📊 Analyzing trends and opportunities...\n", | ||
| "\n", | ||
| "=== Trend Analysis Report ===\n", | ||
| " \n", | ||
| "\n", | ||
| "🧪 Sample Topics for Testing\n", | ||
| "1. AI in healthcare\n", | ||
| "2. Sustainable energy startups\n", | ||
| "3. Fintech innovation\n", | ||
| "4. Remote work technology\n", | ||
| "5. Climate tech investments\n", | ||
| "\n", | ||
| "==================================================\n", | ||
| "📈 Powered by AI Market & Startup Trend Agent | Built with PraisonAI\n" | ||
| ] | ||
| } | ||
| ], | ||
| "source": [ | ||
| "# Main Application (Google Colab Version)\n", | ||
| "import os\n", | ||
| "import warnings\n", | ||
| "warnings.filterwarnings(\"ignore\", category=ImportWarning)\n", | ||
| "\n", | ||
| "# Download all necessary NLTK data for newspaper3k\n", | ||
| "import nltk\n", | ||
| "nltk.download('tokenizers/punkt')\n", | ||
| "nltk.download('averaged_perceptron_tagger', quiet=True)\n", | ||
| "nltk.download('maxent_ne_chunker', quiet=True)\n", | ||
| "nltk.download('words', quiet=True)\n", | ||
| "\n", | ||
| "# Initialize tools\n", | ||
| "news_tool = NewsSearchTool()\n", | ||
| "summary_tool = ArticleSummaryTool()\n", | ||
| "trend_tool = TrendInsightTool()\n", | ||
| "\n", | ||
| "print(\"📈 AI Market & Startup Trend Agent\")\n", | ||
| "print(\"Analyze current market and startup trends using real-time news and AI summarization!\")\n", | ||
| "\n", | ||
| "# User input\n", | ||
| "topic = input(\"\\nEnter the area of interest for your Startup or Market Trend Analysis: \").strip()\n", | ||
| "\n", | ||
| "if topic:\n", | ||
| " print(f\"\\n🔍 Searching for recent news on: {topic}\")\n", | ||
| " articles = news_tool.search_news(topic)\n", | ||
| " if articles:\n", | ||
| " print(f\"✅ Found {len(articles)} articles. Summarizing...\")\n", | ||
| " summaries = summary_tool.summarize_articles(articles)\n", | ||
| " print(\"\\n📰 Article Summaries:\")\n", | ||
| " for i, s in enumerate(summaries, 1):\n", | ||
| " print(f\"{i}. {s['title']}\")\n", | ||
| " print(f\" {s['summary']}\")\n", | ||
| " print(f\" [Read more]({s['url']})\\n\")\n", | ||
| "\n", | ||
| " print(\"📊 Analyzing trends and opportunities...\")\n", | ||
| " trend_report = trend_tool.extract_trends(summaries)\n", | ||
| " print(\"\\n=== Trend Analysis Report ===\")\n", | ||
| " print(trend_report[\"trend_report\"])\n", | ||
| " else:\n", | ||
| " print(\"❌ No news articles found for this topic.\")\n", | ||
| "else:\n", | ||
| " print(\"❌ No topic entered. Please provide a topic to analyze.\")\n", | ||
| "\n", | ||
| "# Sample topics for testing\n", | ||
| "print(\"\\n🧪 Sample Topics for Testing\")\n", | ||
| "sample_topics = [\n", | ||
| " \"AI in healthcare\",\n", | ||
| " \"Sustainable energy startups\",\n", | ||
| " \"Fintech innovation\",\n", | ||
| " \"Remote work technology\",\n", | ||
| " \"Climate tech investments\"\n", | ||
| "]\n", | ||
| "for i, t in enumerate(sample_topics, 1):\n", | ||
| " print(f\"{i}. {t}\")\n", | ||
| "\n", | ||
| "# Footer\n", | ||
| "print(\"\\n\" + \"=\"*50)\n", | ||
| "print(\"📈 Powered by AI Market & Startup Trend Agent | Built with PraisonAI\")" | ||
| ] | ||
| } | ||
| ], | ||
| "metadata": { | ||
| "colab": { | ||
| "provenance": [] | ||
| }, | ||
| "kernelspec": { | ||
| "display_name": "Python 3", | ||
| "name": "python3" | ||
| }, | ||
| "language_info": { | ||
| "name": "python" | ||
| } | ||
| }, | ||
| "nbformat": 4, | ||
| "nbformat_minor": 0 | ||
| } |
Remove duplicate file
This file is identical to ai_market_startup_trend_agent.ipynb with only filename capitalization differences. Having duplicate files can cause confusion and maintenance issues.
Consider removing this duplicate file and keeping only one version with consistent naming convention (preferably lowercase as per Python conventions).
🤖 Prompt for AI Agents
In examples/cookbooks/Ai_Market_Startup_Trend_Agent.ipynb lines 1 to 402, this
file is a duplicate of ai_market_startup_trend_agent.ipynb differing only in
filename capitalization. To avoid confusion and maintenance overhead, remove
this duplicate file entirely and keep only the lowercase named version
ai_market_startup_trend_agent.ipynb as per Python naming conventions.
| " def get_exercise_plan(self, goal: str, fitness_level: str,\n", | ||
| " available_time: int, equipment: List[str]) -> Dict[str, Any]:\n", | ||
| " \"\"\"Generate personalized exercise plan\"\"\"\n", | ||
| " try:\n", | ||
| " # Determine exercise focus based on goal\n", | ||
| " if goal == \"Lose Weight\":\n", | ||
| " focus = [\"cardio\", \"strength\"]\n", | ||
| " cardio_ratio = 0.6\n", | ||
| " elif goal == \"Gain Muscle\":\n", | ||
| " focus = [\"strength\", \"cardio\"]\n", | ||
| " cardio_ratio = 0.3\n", | ||
| " elif goal == \"Endurance\":\n", | ||
| " focus = [\"cardio\", \"strength\"]\n", | ||
| " cardio_ratio = 0.7\n", | ||
| " elif goal == \"Strength Training\":\n", | ||
| " focus = [\"strength\", \"flexibility\"]\n", | ||
| " cardio_ratio = 0.2\n", | ||
| " else: # Stay Fit\n", | ||
| " focus = [\"strength\", \"cardio\", \"flexibility\"]\n", | ||
| " cardio_ratio = 0.4\n", | ||
| "\n", | ||
| " # Generate workout plan\n", | ||
| " workout_plan = {\n", | ||
| " \"warm_up\": self._get_warmup_routine(fitness_level),\n", | ||
| " \"main_workout\": self._get_main_workout(focus, fitness_level, available_time, cardio_ratio),\n", | ||
| " \"cool_down\": self._get_cooldown_routine(fitness_level),\n", | ||
| " \"frequency\": self._get_workout_frequency(goal, fitness_level),\n", | ||
| " \"progression\": self._get_progression_plan(fitness_level, goal)\n", | ||
| " }\n", | ||
| "\n", | ||
| " return {\n", | ||
| " \"success\": True,\n", | ||
| " \"goal\": goal,\n", | ||
| " \"fitness_level\": fitness_level,\n", | ||
| " \"workout_plan\": workout_plan,\n", | ||
| " \"tips\": self._get_exercise_tips(goal, fitness_level)\n", | ||
| " }\n", | ||
| " except Exception as e:\n", | ||
| " return {\"error\": f\"Error generating exercise plan: {str(e)}\"}\n", |
🛠️ Refactor suggestion
Utilize the equipment parameter for exercise selection
The get_exercise_plan method accepts an equipment parameter but doesn't use it to filter exercises. Consider implementing equipment-based filtering for more personalized recommendations.
def get_exercise_plan(self, goal: str, fitness_level: str,
available_time: int, equipment: List[str]) -> Dict[str, Any]:
"""Generate personalized exercise plan"""
try:
+ # Filter exercises based on available equipment
+ equipment_exercises = {
+ "bodyweight": ["Push-ups", "Squats", "Planks", "Lunges", "Burpees"],
+ "dumbbells": ["Dumbbell rows", "Dumbbell press", "Bicep curls"],
+ "barbell": ["Barbell squats", "Deadlifts", "Bench press"],
+ "none": ["Walking", "Running", "Stretching"]
+ }
+
# Determine exercise focus based on goal
Then use this filtering when selecting exercises in the _get_main_workout method.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In examples/cookbooks/ai_health_fitness_agent.ipynb around lines 345 to 383, the
get_exercise_plan method receives an equipment parameter but does not use it to
filter exercises. To fix this, modify the method to pass the equipment list to
the _get_main_workout call and update the _get_main_workout method to filter
exercises based on the available equipment, ensuring the workout plan is
personalized according to the user's equipment.
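A sketch of what that filtering could look like. The exercise catalogue below is hypothetical, as is the function name; in practice both would come from the notebook's own exercise data:

```python
from typing import Dict, List

# Hypothetical catalogue keyed by required equipment (assumption).
EXERCISES: Dict[str, List[str]] = {
    "bodyweight": ["Push-ups", "Squats", "Planks", "Lunges"],
    "dumbbells": ["Dumbbell rows", "Dumbbell press", "Bicep curls"],
    "barbell": ["Barbell squats", "Deadlifts", "Bench press"],
}

def filter_by_equipment(equipment: List[str]) -> List[str]:
    """Bodyweight moves are always available; the user's equipment
    unlocks the rest."""
    available = ["bodyweight"] + [e.lower() for e in equipment]
    return [ex for gear, moves in EXERCISES.items()
            if gear in available for ex in moves]
```

`_get_main_workout` could then draw from `filter_by_equipment(equipment)` instead of a fixed list.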
| "import os\n", | ||
| "\n", | ||
| "# Set your Gemini API key\n", | ||
| "gemini_key = \"Enter your api key here\" # Get from https://aistudio.google.com/apikey\n", | ||
| "\n", | ||
| "# Set environment variable\n", | ||
| "os.environ[\"GOOGLE_API_KEY\"] = gemini_key\n", | ||
| "\n", | ||
| "# Model selection\n", | ||
| "model_choice = \"gemini-2.0-flash-exp\" # Options: \"gemini-2.0-flash-exp\", \"gemini-1.5-pro\", \"gemini-1.5-flash\"\n", | ||
| "\n", | ||
| "print(\"✅ API key configured!\")\n", | ||
| "print(f\"✅ Using model: {model_choice}\")" |
Secure the API key configuration
Similar to the data analysis notebook, avoid hardcoding API keys. Use environment variables for better security.
import os
# Set your Gemini API key
-gemini_key = "Enter your api key here" # Get from https://aistudio.google.com/apikey
+gemini_key = os.getenv("GOOGLE_API_KEY", "")
+
+if not gemini_key:
+ print("⚠️ Please set the GOOGLE_API_KEY environment variable")
+ print("Get your API key from: https://aistudio.google.com/apikey")
+ raise ValueError("Google API key not found")
# Set environment variable
os.environ["GOOGLE_API_KEY"] = gemini_key
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "import os\n", | |
| "\n", | |
| "# Set your Gemini API key\n", | |
| "gemini_key = \"Enter your api key here\" # Get from https://aistudio.google.com/apikey\n", | |
| "\n", | |
| "# Set environment variable\n", | |
| "os.environ[\"GOOGLE_API_KEY\"] = gemini_key\n", | |
| "\n", | |
| "# Model selection\n", | |
| "model_choice = \"gemini-2.0-flash-exp\" # Options: \"gemini-2.0-flash-exp\", \"gemini-1.5-pro\", \"gemini-1.5-flash\"\n", | |
| "\n", | |
| "print(\"✅ API key configured!\")\n", | |
| "print(f\"✅ Using model: {model_choice}\")" | |
| import os | |
| # Set your Gemini API key | |
| - gemini_key = "Enter your api key here" # Get from https://aistudio.google.com/apikey | |
| + gemini_key = os.getenv("GOOGLE_API_KEY", "") | |
| + | |
| + if not gemini_key: | |
| + print("⚠️ Please set the GOOGLE_API_KEY environment variable") | |
| + print("Get your API key from: https://aistudio.google.com/apikey") | |
| + raise ValueError("Google API key not found") | |
| # Set environment variable | |
| os.environ["GOOGLE_API_KEY"] = gemini_key |
🤖 Prompt for AI Agents
In examples/cookbooks/ai_health_fitness_agent.ipynb around lines 82 to 94, the
API key is hardcoded as a string which is insecure. Modify the code to read the
Gemini API key from an environment variable instead of hardcoding it. Remove the
direct assignment of the key string and use os.environ.get to retrieve the key,
ensuring the key is set externally before running the notebook.
| " def calculate_calories(self, age: int, weight_kg: float, height_cm: float,\n", | ||
| " sex: str, activity_level: str, goal: str) -> Dict[str, Any]:\n", | ||
| " \"\"\"Calculate daily calorie needs based on goals\"\"\"\n", | ||
| " try:\n", | ||
| " # Calculate BMR using Mifflin-St Jeor Equation\n", | ||
| " if sex.lower() == \"male\":\n", | ||
| " bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age + 5\n", | ||
| " else:\n", | ||
| " bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age - 161\n", | ||
| "\n", | ||
| " # Calculate TDEE (Total Daily Energy Expenditure)\n", | ||
| " activity_multiplier = self.activity_multipliers.get(activity_level, 1.2)\n", | ||
| " tdee = bmr * activity_multiplier\n", | ||
| "\n", |
🛠️ Refactor suggestion
Add input validation for sex parameter
The calculate_calories method should validate the sex parameter to handle unexpected inputs gracefully.
def calculate_calories(self, age: int, weight_kg: float, height_cm: float,
sex: str, activity_level: str, goal: str) -> Dict[str, Any]:
"""Calculate daily calorie needs based on goals"""
try:
+ # Validate sex parameter
+ if sex.lower() not in ['male', 'female']:
+ # Default to female formula for 'other' or unrecognized inputs
+ sex = 'female'
+
# Calculate BMR using Mifflin-St Jeor Equation
if sex.lower() == "male":
bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age + 5
else:
bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age - 161
📝 Committable suggestion
| " def calculate_calories(self, age: int, weight_kg: float, height_cm: float,\n", | |
| " sex: str, activity_level: str, goal: str) -> Dict[str, Any]:\n", | |
| " \"\"\"Calculate daily calorie needs based on goals\"\"\"\n", | |
| " try:\n", | |
| " # Calculate BMR using Mifflin-St Jeor Equation\n", | |
| " if sex.lower() == \"male\":\n", | |
| " bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age + 5\n", | |
| " else:\n", | |
| " bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age - 161\n", | |
| "\n", | |
| " # Calculate TDEE (Total Daily Energy Expenditure)\n", | |
| " activity_multiplier = self.activity_multipliers.get(activity_level, 1.2)\n", | |
| " tdee = bmr * activity_multiplier\n", | |
| "\n", | |
| def calculate_calories(self, age: int, weight_kg: float, height_cm: float, | |
| sex: str, activity_level: str, goal: str) -> Dict[str, Any]: | |
| """Calculate daily calorie needs based on goals""" | |
| try: | |
| # Validate sex parameter | |
| if sex.lower() not in ['male', 'female']: | |
| # Default to female formula for 'other' or unrecognized inputs | |
| sex = 'female' | |
| # Calculate BMR using Mifflin-St Jeor Equation | |
| if sex.lower() == "male": | |
| bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age + 5 | |
| else: | |
| bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age - 161 | |
| # Calculate TDEE (Total Daily Energy Expenditure) | |
| activity_multiplier = self.activity_multipliers.get(activity_level, 1.2) | |
| tdee = bmr * activity_multiplier | |
| # ... rest of the method unchanged ... |
🤖 Prompt for AI Agents
In examples/cookbooks/ai_health_fitness_agent.ipynb around lines 237 to 250, the
calculate_calories method lacks validation for the sex parameter, which may
cause errors with unexpected inputs. Add input validation to check if sex is
either "male" or "female" (case-insensitive). If the input is invalid, raise a
ValueError with a clear message. This ensures the method handles unexpected sex
values gracefully before proceeding with calculations.
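For reference, the Mifflin-St Jeor formula used above, with the suggested validation folded in, can be checked by hand. A standalone sketch (function name is illustrative):

```python
def bmr_mifflin_st_jeor(weight_kg: float, height_cm: float,
                        age: int, sex: str) -> float:
    """Basal metabolic rate in kcal/day via the Mifflin-St Jeor equation."""
    if sex.lower() not in ("male", "female"):
        raise ValueError("sex must be 'male' or 'female'")
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age
    return base + 5 if sex.lower() == "male" else base - 161

# 70 kg, 175 cm, 30-year-old male: 700 + 1093.75 - 150 + 5 = 1648.75 kcal/day
```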
| "import os\n", | ||
| "os.environ[\"OPENAI_API_KEY\"] = \"sk-...\" # <-- Replace with your actual OpenAI API key" |
Replace placeholder API key with secure configuration.
The hardcoded API key placeholder poses a security risk and won't work for users.
-os.environ["OPENAI_API_KEY"] = "sk-..." # <-- Replace with your actual OpenAI API key
+# Set your OpenAI API key via environment variable or input
+import getpass
+if "OPENAI_API_KEY" not in os.environ:
os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter your OpenAI API key: ")
📝 Committable suggestion
| "import os\n", | |
| "os.environ[\"OPENAI_API_KEY\"] = \"sk-...\" # <-- Replace with your actual OpenAI API key" | |
| import os | |
| # Set your OpenAI API key via environment variable or input | |
| import getpass | |
| if "OPENAI_API_KEY" not in os.environ: | |
| os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter your OpenAI API key: ") |
🤖 Prompt for AI Agents
In examples/cookbooks/AI_Enrollment_Counselor.ipynb around lines 67 to 68, the
code hardcodes a placeholder OpenAI API key, which is insecure and
non-functional. Remove the hardcoded key and instead load the API key securely
from an environment variable or a secure configuration file. Update the code to
read the API key dynamically, for example using os.environ.get, and provide
instructions for users to set their API key externally.
| "id": "rL-UiB5NOspT" | ||
| }, | ||
| "source": [ | ||
| "[](https://colab.research.google.com/github/Dhivya-Bharathy/PraisonAI/blob/main/examples/cookbooks/AI_Enrollment_Counselor.ipynb)\n" |
💡 Verification agent
🧩 Analysis chain
Verify the GitHub repository reference in Colab badge.
The Colab badge references Dhivya-Bharathy/PraisonAI which may not be the main repository.
Please verify if this should reference the main PraisonAI repository instead:
🏁 Script executed:
#!/bin/bash
# Check if the repository reference in the Colab badge is correct
echo "Current reference: Dhivya-Bharathy/PraisonAI"
echo "Expected reference might be: MervinPraison/PraisonAI"
echo ""
echo "Checking repository structure..."
rg -n "github.com.*PraisonAI" --type md

Length of output: 901
Update Colab badge to point at the main repository
The Colab badge in examples/cookbooks/AI_Enrollment_Counselor.ipynb is still referencing Dhivya-Bharathy/PraisonAI; it should use MervinPraison/PraisonAI to match the project’s primary repo.
• File: examples/cookbooks/AI_Enrollment_Counselor.ipynb
Line: ~27
Suggested change:
- "[](https://colab.research.google.com/github/Dhivya-Bharathy/PraisonAI/blob/main/examples/cookbooks/AI_Enrollment_Counselor.ipynb)\n"
+ "[](https://colab.research.google.com/github/MervinPraison/PraisonAI/blob/main/examples/cookbooks/AI_Enrollment_Counselor.ipynb)\n"
📝 Committable suggestion
| "[](https://colab.research.google.com/github/Dhivya-Bharathy/PraisonAI/blob/main/examples/cookbooks/AI_Enrollment_Counselor.ipynb)\n" | |
| "[](https://colab.research.google.com/github/MervinPraison/PraisonAI/blob/main/examples/cookbooks/AI_Enrollment_Counselor.ipynb)\n" |
🤖 Prompt for AI Agents
In examples/cookbooks/AI_Enrollment_Counselor.ipynb at line 27, update the Colab
badge URL to replace the GitHub user path from 'Dhivya-Bharathy/PraisonAI' to
'MervinPraison/PraisonAI' so it correctly points to the main project repository.
User description
An AI-powered agent that analyzes current market and startup trends using real-time news, web search, and multi-agent collaboration.
Features include automated news gathering, article summarization, and actionable trend reports for entrepreneurs and investors.
Built with PraisonAI, supports DuckDuckGo news search, Newspaper3k summarization, and provides clear, actionable insights for any area of interest.
PR Type
Enhancement
Description
• Added comprehensive AI agent cookbook collection with 7+ specialized Jupyter notebooks
• Implemented AI Market & Startup Trend Agent with real-time news search, article summarization, and trend analysis using DuckDuckGo and Newspaper3k
• Created AI Data Analysis Agent with custom visualization, preprocessing, and statistical analysis tools
• Added AI Health & Fitness Agent with BMI calculation, calorie tracking, and personalized exercise recommendations
• Implemented Local RAG Document Q&A Agent using ChromaDB for vector storage and local Ollama models
• Created AI Meme Creator Agent with browser automation and multi-model support (OpenAI, Claude, Deepseek)
• Added AI Enrollment Counselor Agent for university admissions automation and document validation
• All agents feature interactive interfaces, custom tool implementations, and comprehensive YAML configurations
Changes walkthrough 📝
7 files
ai_data_analysis_agent.ipynb
Add AI Data Analysis Agent Jupyter Notebook (examples/cookbooks/ai_data_analysis_agent.ipynb)
• Added a comprehensive Jupyter notebook for an AI data analysis agent with data visualization, preprocessing, and statistical analysis capabilities
• Implemented custom tools for data visualization (DataVisualizationTool), preprocessing (DataPreprocessingTool), and statistical analysis (StatisticalAnalysisTool)
• Created a complete interactive application with file upload, automated analysis, and custom visualization generation
• Included YAML configuration for the AI agent with specific instructions for data analysis tasks
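The statistical-analysis tool above can be sketched as a per-column summary. This stand-in uses only the standard library; the notebook's actual StatisticalAnalysisTool may be pandas-based and expose a different interface:

```python
import statistics

def summarize(values):
    """Return basic descriptive statistics for a list of numeric values."""
    return {
        "count": len(values),
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "stdev": statistics.stdev(values) if len(values) > 1 else 0.0,
        "min": min(values),
        "max": max(values),
    }
```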
Ai_Market_Startup_Trend_Agent.ipynb
Add AI Market & Startup Trend Agent Notebook (examples/cookbooks/Ai_Market_Startup_Trend_Agent.ipynb)
• Added a Jupyter notebook for an AI-powered market and startup trend analysis agent
• Implemented custom tools for news search (NewsSearchTool), article summarization (ArticleSummaryTool), and trend extraction (TrendInsightTool)
• Created an interactive application that searches for recent news, summarizes articles, and analyzes trends for entrepreneurs and investors
• Configured to use Anthropic's Claude model with DuckDuckGo search and Newspaper3k for content processing
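As an illustration of the trend-extraction step, a TrendInsightTool could surface the most frequent keywords across article summaries before handing them to the LLM. This is only a stand-in (the notebook delegates the actual trend analysis to Claude), and the stop-word list is an assumption:

```python
import re
from collections import Counter

STOP_WORDS = {"with", "that", "this", "from", "have", "will", "more", "their"}

def extract_trends(summaries, top_n=5):
    """Return the most common keywords (4+ letters) across article summaries."""
    words = []
    for text in summaries:
        words.extend(
            w for w in re.findall(r"[a-z]{4,}", text.lower())
            if w not in STOP_WORDS
        )
    return [word for word, _ in Counter(words).most_common(top_n)]
```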
ai_health_fitness_agent.ipynb
Add AI Health & Fitness Agent Notebook (examples/cookbooks/ai_health_fitness_agent.ipynb)
• Added a comprehensive AI health and fitness agent notebook with personalized dietary and exercise recommendations
• Implemented custom tools for BMI calculation, calorie calculation, and exercise recommendations
• Created an interactive interface for user profile input and health assessment with safety considerations
• Included sample meal plans and progress tracking recommendations based on user goals and preferences
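The BMI-calculation tool mentioned above amounts to a small formula plus categorization; this sketch uses the common WHO adult thresholds, and the notebook's actual tool interface may differ:

```python
def calculate_bmi(weight_kg: float, height_cm: float) -> float:
    """BMI = weight (kg) / height (m) squared, rounded to one decimal."""
    height_m = height_cm / 100
    return round(weight_kg / (height_m ** 2), 1)

def bmi_category(bmi: float) -> str:
    """Map a BMI value to the standard WHO adult category."""
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "normal"
    if bmi < 30:
        return "overweight"
    return "obese"
```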
local_rag_document_qa_agent.ipynb
Add Local RAG Document Q&A Agent Notebook (examples/cookbooks/local_rag_document_qa_agent.ipynb)
• Added a local RAG document Q&A agent using ChromaDB for vector storage and local LLM inference
• Implemented document processing tools for PDF, TXT, MD, and CSV formats with text chunking capabilities
• Created an interactive document upload and Q&A session with vector similarity search
• Integrated with local Ollama models for document-based question answering without external API calls
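The text-chunking step that precedes embedding into ChromaDB can be sketched as a sliding window with overlap; the chunk size and overlap values here are illustrative defaults, not necessarily the notebook's settings:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50):
    """Split text into fixed-size chunks; overlap preserves context at boundaries."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap
    return chunks
```

Each chunk is then embedded and stored, so a query's nearest neighbors can be retrieved and passed to the local model as context.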
ai_market_startup_trend_agent.ipynb
Add AI Market & Startup Trend Agent Notebook (examples/cookbooks/ai_market_startup_trend_agent.ipynb)
• Added an AI market and startup trend analysis agent using real-time news search and article summarization
• Implemented tools for DuckDuckGo news search, Newspaper3k article processing, and trend extraction
• Created an interactive interface for topic-based market analysis with actionable insights
• Integrated with Anthropic Claude for intelligent trend analysis and startup opportunity identification
ai_meme_creator_agent.ipynb
AI Meme Creator Agent Notebook Implementation (examples/cookbooks/ai_meme_creator_agent.ipynb)
• Added a complete Jupyter notebook for an AI meme creator agent with browser automation capabilities
• Implemented custom tools for meme template search, caption generation, and meme validation
• Integrated multi-model support (OpenAI, Claude, Deepseek) with browser automation using the browser-use library
• Provided a comprehensive meme generation workflow with quality assessment and manual fallback instructions
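The meme-validation step could look something like the check below; the rules and length limit are assumptions for illustration, not the notebook's actual quality-assessment criteria:

```python
def validate_meme(top_text: str, bottom_text: str, max_len: int = 80):
    """Flag empty or overlong captions before rendering a meme."""
    issues = []
    if not top_text.strip() and not bottom_text.strip():
        issues.append("both captions are empty")
    for label, text in (("top", top_text), ("bottom", bottom_text)):
        if len(text) > max_len:
            issues.append(f"{label} caption exceeds {max_len} characters")
    return {"valid": not issues, "issues": issues}
```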
AI_Enrollment_Counselor.ipynb
AI Enrollment Counselor Agent Notebook (examples/cookbooks/AI_Enrollment_Counselor.ipynb)
• Created a Jupyter notebook for an AI enrollment counselor agent for university admissions automation
• Implemented document validation functionality to check application completeness
• Added agent configuration with role, goal, and instructions for admissions guidance
• Provided examples for document checking and general admissions question handling
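The document-completeness check described above reduces to a set difference; the required-document list here is a hypothetical example, not the notebook's actual policy:

```python
# Hypothetical required-document list for illustration only.
REQUIRED_DOCS = {"transcript", "personal_statement", "recommendation_letter", "test_scores"}

def check_application(submitted_docs):
    """Report whether an application is complete and which documents are missing."""
    missing = sorted(REQUIRED_DOCS - set(submitted_docs))
    return {"complete": not missing, "missing": missing}
```

The agent can then turn the `missing` list into guidance for the applicant.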
1 file
Summary by CodeRabbit