Conversation

@Dhivya-Bharathy (Contributor) commented Jul 8, 2025

User description

An AI-powered agent that analyzes current market and startup trends using real-time news, web search, and multi-agent collaboration.
Features include automated news gathering, article summarization, and actionable trend reports for entrepreneurs and investors.
Built with PraisonAI, it supports DuckDuckGo news search and Newspaper3k article summarization, and delivers clear, actionable insights for any area of interest.


PR Type

Enhancement


Description

• Added comprehensive AI agent cookbook collection with 7+ specialized Jupyter notebooks
• Implemented AI Market & Startup Trend Agent with real-time news search, article summarization, and trend analysis using DuckDuckGo and Newspaper3k
• Created AI Data Analysis Agent with custom visualization, preprocessing, and statistical analysis tools
• Added AI Health & Fitness Agent with BMI calculation, calorie tracking, and personalized exercise recommendations
• Implemented Local RAG Document Q&A Agent using ChromaDB for vector storage and local Ollama models
• Created AI Meme Creator Agent with browser automation and multi-model support (OpenAI, Claude, Deepseek)
• Added AI Enrollment Counselor Agent for university admissions automation and document validation
• All agents feature interactive interfaces, custom tool implementations, and comprehensive YAML configurations


Changes walkthrough 📝

Relevant files
Enhancement
7 files
ai_data_analysis_agent.ipynb
Add AI Data Analysis Agent Jupyter Notebook                           

examples/cookbooks/ai_data_analysis_agent.ipynb

• Added a comprehensive Jupyter notebook for an AI data analysis agent with data visualization, preprocessing, and statistical analysis capabilities
• Implemented custom tools for data visualization (DataVisualizationTool), preprocessing (DataPreprocessingTool), and statistical analysis (StatisticalAnalysisTool)
• Created a complete interactive application with file upload, automated analysis, and custom visualization generation
• Included YAML configuration for the AI agent with specific instructions for data analysis tasks

+1032/-0
Ai_Market_Startup_Trend_Agent.ipynb
Add AI Market & Startup Trend Agent Notebook                         

examples/cookbooks/Ai_Market_Startup_Trend_Agent.ipynb

• Added a Jupyter notebook for an AI-powered market and startup trend analysis agent
• Implemented custom tools for news search (NewsSearchTool), article summarization (ArticleSummaryTool), and trend extraction (TrendInsightTool)
• Created an interactive application that searches for recent news, summarizes articles, and analyzes trends for entrepreneurs and investors
• Configured to use Anthropic's Claude model with DuckDuckGo search and Newspaper3k for content processing

+402/-0 
ai_health_fitness_agent.ipynb
Add AI Health & Fitness Agent Notebook                                     

examples/cookbooks/ai_health_fitness_agent.ipynb

• Added a comprehensive AI health and fitness agent notebook with personalized dietary and exercise recommendations
• Implemented custom tools for BMI calculation, calorie calculation, and exercise recommendations
• Created an interactive interface for user profile input and health assessment with safety considerations
• Included sample meal plans and progress tracking recommendations based on user goals and preferences

+1021/-0
local_rag_document_qa_agent.ipynb
Add Local RAG Document Q&A Agent Notebook                               

examples/cookbooks/local_rag_document_qa_agent.ipynb

• Added a local RAG document Q&A agent using ChromaDB for vector storage and local LLM inference
• Implemented document processing tools for PDF, TXT, MD, and CSV formats with text chunking capabilities
• Created an interactive document upload and Q&A session with vector similarity search
• Integrated with local Ollama models for document-based question answering without external API calls

+922/-0 
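The entry above mentions text chunking before vector storage. The notebook's exact parameters aren't shown in this summary, but a minimal fixed-size chunker with overlap (the 500/50 values are illustrative, not the notebook's) could look like:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping fixed-size chunks for vector storage."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step back by `overlap` each time
    return chunks
```

Each chunk is then embedded and stored in ChromaDB; at query time the question is embedded and the nearest chunks are retrieved and passed to the local Ollama model as context.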
ai_market_startup_trend_agent.ipynb
Add AI Market & Startup Trend Agent Notebook                         

examples/cookbooks/ai_market_startup_trend_agent.ipynb

• Added an AI market and startup trend analysis agent using real-time news search and article summarization
• Implemented tools for DuckDuckGo news search, Newspaper3k article processing, and trend extraction
• Created an interactive interface for topic-based market analysis with actionable insights
• Integrated with Anthropic Claude for intelligent trend analysis and startup opportunity identification

+402/-0 
ai_meme_creator_agent.ipynb
AI Meme Creator Agent Notebook Implementation                       

examples/cookbooks/ai_meme_creator_agent.ipynb

• Added a complete Jupyter notebook for an AI meme creator agent with browser automation capabilities
• Implemented custom tools for meme template search, caption generation, and meme validation
• Integrated multi-model support (OpenAI, Claude, Deepseek) with browser automation using the browser-use library
• Provided a comprehensive meme generation workflow with quality assessment and manual fallback instructions

+800/-0 
AI_Enrollment_Counselor.ipynb
AI Enrollment Counselor Agent Notebook                                     

examples/cookbooks/AI_Enrollment_Counselor.ipynb

• Created a Jupyter notebook for an AI enrollment counselor agent for university admissions automation
• Implemented document validation functionality to check application completeness
• Added agent configuration with role, goal, and instructions for admissions guidance
• Provided examples for document checking and general admissions question handling

+444/-0 
Additional files
1 file
intelligent_travel_planning_agent.ipynb +3939/-0

Need help?
  • Type /help how to ... in the comments thread for any questions about Qodo Merge usage.
  • Check out the documentation for more information.
  • Summary by CodeRabbit

    • New Features
      • Added an AI Enrollment Counselor notebook for automating university admissions guidance and document checks.
      • Introduced an AI Market & Startup Trend Agent notebook for real-time trend analysis using news and web search.
      • Released an AI Data Analysis Agent notebook supporting file upload, preprocessing, statistical analysis, and visualization.
      • Added an AI Health & Fitness Agent notebook for personalized health, fitness, and nutrition recommendations.
      • Launched an AI Meme Creator Agent notebook for generating memes with template search, caption generation, and validation.
      • Provided a Local RAG Document QA Agent notebook for document-based question answering using local LLMs and vector databases.

    coderabbitai bot commented Jul 8, 2025

    Walkthrough

    Several new Jupyter notebook examples have been added, each demonstrating an AI agent for a specific domain: university admissions counseling, market/startup trend analysis, data analysis, health and fitness, meme creation, and local document QA with RAG. Each notebook introduces custom tool classes, prompt configurations, and main application logic tailored to its use case.

    Changes

    File(s) Change Summary
    examples/cookbooks/AI_Enrollment_Counselor.ipynb Added notebook for an AI Enrollment Counselor agent, including prompt setup, agent instantiation, a helper function for queries, and example usage for admissions Q&A and document completeness checks.
    examples/cookbooks/Ai_Market_Startup_Trend_Agent.ipynb
    examples/cookbooks/ai_market_startup_trend_agent.ipynb
    Added notebooks for AI agents that analyze market and startup trends using news search, article summarization, and trend extraction tools; both define similar custom tools and main flows for topic-based trend analysis via real-time news.
    examples/cookbooks/ai_data_analysis_agent.ipynb Added notebook for an AI Data Analysis Agent with tools for data visualization, preprocessing, and statistical analysis; includes file upload, auto-insights, and visualization generation.
    examples/cookbooks/ai_health_fitness_agent.ipynb Added notebook for an AI Health & Fitness Agent with tools for BMI calculation, calorie needs estimation, and exercise recommendations; features interactive input and personalized health planning.
    examples/cookbooks/ai_meme_creator_agent.ipynb Added notebook for an AI Meme Creator Agent with tools for template search, caption generation, meme validation, and meme image creation via browser automation; supports multiple LLMs and interactive meme generation.
    examples/cookbooks/local_rag_document_qa_agent.ipynb Added notebook for a Local RAG Document QA Agent with tools for document processing, vector database management, and text chunking; supports document upload, chunking, vector storage, and interactive Q&A via local LLMs.

    Sequence Diagram(s)

    sequenceDiagram
        participant User
        participant Notebook
        participant CustomTool(s)
        participant Agent/LLM
    
        User->>Notebook: Upload data / Enter query
        Notebook->>CustomTool(s): Preprocess input (e.g., file, query)
        CustomTool(s)-->>Notebook: Return processed data/results
        Notebook->>Agent/LLM: Send prompt with context/tools
        Agent/LLM-->>Notebook: Return generated response/analysis
        Notebook-->>User: Display results, visualizations, or answers
    

    Possibly related PRs

    • MervinPraison/PraisonAI#732: Adds the same AI Enrollment Counselor notebook, including the ask_enrollment_agent function and agent instance; directly related due to identical functionality.

    Suggested labels

    Review effort 3/5

    Poem

    In the garden of code, new agents appear,
    For memes, trends, and health, their wisdom is clear.
    With data to crunch and documents to scan,
    A rabbit hops by, with a clever plan.
    From fitness to fun, and answers to lend—
    These notebooks bring magic, from start to end!
    🐇✨


    Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


    Chat

    There are 3 ways to chat with CodeRabbit:

    • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
      • I pushed a fix in commit <commit_id>, please review it.
      • Explain this complex logic.
      • Open a follow-up GitHub issue for this discussion.
    • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
      • @coderabbitai explain this code block.
      • @coderabbitai modularize this function.
    • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
      • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
      • @coderabbitai read src/utils.ts and explain its main purpose.
      • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
      • @coderabbitai help me debug CodeRabbit configuration file.

    Support

    Need help? Create a ticket on our support page for assistance with any issues or questions.

    Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

    CodeRabbit Commands (Invoked using PR comments)

    • @coderabbitai pause to pause the reviews on a PR.
    • @coderabbitai resume to resume the paused reviews.
    • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
    • @coderabbitai full review to do a full review from scratch and review all the files again.
    • @coderabbitai summary to regenerate the summary of the PR.
    • @coderabbitai generate docstrings to generate docstrings for this PR.
    • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
    • @coderabbitai resolve resolve all the CodeRabbit review comments.
    • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
    • @coderabbitai help to get help.

    Other keywords and placeholders

    • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
    • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
    • Add @coderabbitai anywhere in the PR title to generate the title automatically.

    CodeRabbit Configuration File (.coderabbit.yaml)

    • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
    • Please see the configuration documentation for more information.
    • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

    Documentation and Community

    • Visit our Documentation for detailed information on how to use CodeRabbit.
    • Join our Discord Community to get help, request features, and share feedback.
    • Follow us on X/Twitter for updates and announcements.


    @gemini-code-assist gemini-code-assist bot left a comment


    Summary of Changes

Hello @Dhivya-Bharathy, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

    This pull request introduces a suite of six new AI agent notebooks, each designed to address a specific domain, from market analysis and data insights to personal health, educational counseling, creative content generation, and local document Q&A. These additions showcase various applications of AI agents, leveraging different tools and models to provide practical solutions.

    Highlights

    • New AI Agent Notebooks: This pull request introduces six new AI agent notebooks, significantly expanding the examples/cookbooks directory with diverse AI capabilities.
    • AI Market & Startup Trend Agent: A new agent is added for analyzing current market and startup trends, leveraging real-time news, web search (DuckDuckGo), and article summarization (Newspaper3k) to provide actionable insights.
    • AI Enrollment Counselor: A notebook demonstrating an AI agent designed to automate university admissions, capable of answering applicant questions, checking application completeness, and offering personalized guidance.
    • AI Data Analysis Agent: An intelligent agent for comprehensive data analysis, supporting CSV/Excel file uploads, statistical analysis, automatic chart generation (Plotly), and natural language querying.
    • AI Health & Fitness Agent: A personalized health and fitness agent that provides dietary and exercise recommendations, including BMI calculation, calorie needs, macronutrient breakdown, and tailored workout plans.
    • AI Meme Creator Agent: An AI agent that generates memes using browser automation and multiple LLM models (OpenAI, Claude, Deepseek), featuring template search, caption generation, and quality validation.
    • Local RAG Document Agent: A Retrieval-Augmented Generation (RAG) agent that operates locally, processing various document formats (PDF, TXT, MD, CSV), utilizing ChromaDB for vector storage, and Ollama for local LLM inference to answer document-based questions.
    Using Gemini Code Assist

    The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

    Invoking Gemini

    You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

    Feature Command Description
    Code Review /gemini review Performs a code review for the current pull request in its current state.
    Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state.
    Comment @gemini-code-assist Responds in comments when explicitly tagged, both in issue comments and review comments.
    Help /gemini help Displays a list of available commands.

    Customization

    To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

    Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

    You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

    Footnotes

    1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


    qodo-merge-pro bot commented Jul 8, 2025

    PR Reviewer Guide 🔍

    Here are some key observations to aid the review process:

    ⏱️ Estimated effort to review: 4 🔵🔵🔵🔵⚪
    🧪 No relevant tests
    🔒 Security concerns

    Sensitive information exposure:
    The notebook contains a hardcoded API key placeholder pattern (line 85: openai_key = "sk-..") that encourages users to directly embed their actual OpenAI API keys in the notebook code. This creates a high risk of accidental exposure through version control, sharing, or logging. The code should use environment variables or secure configuration files instead of direct string assignment for API keys.

    ⚡ Recommended focus areas for review

    Security Risk

    The notebook contains hardcoded API key placeholder that could lead to accidental exposure of real API keys. The code shows a pattern where users might directly paste their actual API keys into the notebook.

    "openai_key = \"sk-..\"\n",
    "\n",
    
    Resource Management

    Temporary files are created but there's no explicit cleanup mechanism shown. The tempfile.NamedTemporaryFile is created with delete=False, which means files will persist and could accumulate over time.

    "            with tempfile.NamedTemporaryFile(delete=False, suffix=\".csv\") as temp_file:\n",
    "                temp_path = temp_file.name\n",
    "                df.to_csv(temp_path, index=False, quoting=csv.QUOTE_ALL)\n",
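An illustrative fix (a sketch, not the notebook's code): keep `delete=False` so the file can be re-opened by path, but unlink it in a `finally` block once it has been consumed. The `rows` data here is made up for demonstration.

```python
import csv
import os
import tempfile

rows = [["name", "score"], ["alice", "9"]]

# delete=False lets other code re-open the file by path (required on
# Windows), so we remove it explicitly once we are done with it.
with tempfile.NamedTemporaryFile(
    mode="w", newline="", suffix=".csv", delete=False
) as temp_file:
    temp_path = temp_file.name
    csv.writer(temp_file, quoting=csv.QUOTE_ALL).writerows(rows)

try:
    with open(temp_path, newline="") as f:
        data = list(csv.reader(f))
finally:
    os.unlink(temp_path)  # clean up even if reading fails
```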
    
    Error Handling

    The statistical analysis and visualization methods have broad exception handling that catches all exceptions and returns generic error messages, which could mask important debugging information and make troubleshooting difficult.

    "        except Exception as e:\n",
    "            return f\"Error creating visualization: {str(e)}\"\n",
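For contrast, a sketch of narrower handling (the function body and logger name are illustrative, not taken from the notebook): report the failures you expect and let everything else propagate with its traceback.

```python
import logging

logger = logging.getLogger("data_analysis_agent")

def create_visualization(df, column):
    """Illustrative: handle the expected failure, let bugs surface."""
    try:
        values = df[column]  # raises KeyError if the column is missing
        return f"plotted {len(values)} points from '{column}'"
    except KeyError:
        logger.exception("unknown column requested")
        return f"Error creating visualization: no column named '{column}'"
    # Any other exception (TypeError, plotting-backend failure, ...)
    # propagates, so the real cause appears in the notebook output
    # instead of a generic error string.
```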
    


    qodo-merge-pro bot commented Jul 8, 2025

    PR Code Suggestions ✨

    Explore these optional code suggestions:

Category | Suggestion | Impact
    Security
    Secure API key handling

    Hardcoded API keys in notebooks pose a security risk as they can be accidentally
    committed to version control. Use environment variables or secure input methods
    instead to protect sensitive credentials.

    examples/cookbooks/intelligent_travel_planning_agent.ipynb [74-75]

    -OPENAI_API_KEY = "sk-..."  # <-- Replace with your OpenAI API key
    -SERP_API_KEY = "..."       # <-- Replace with your SerpAPI key (optional)
    +import getpass
     
    +# Secure way to input API keys
    +OPENAI_API_KEY = getpass.getpass("Enter your OpenAI API key: ")
    +SERP_API_KEY = getpass.getpass("Enter your SerpAPI key (optional): ")
    +

    [To ensure code accuracy, apply this suggestion manually]

    Suggestion importance[1-10]: 9


    Why: The suggestion correctly identifies a security risk of providing placeholder API keys, which users might replace and accidentally commit, and proposes a much safer method using getpass for interactive input.

    High
    Secure API key handling

    The hardcoded API key placeholder should be replaced with a secure method to
    obtain the key. Consider using environment variables or secure input methods to
    prevent accidental exposure of API keys in version control.

    examples/cookbooks/ai_health_fitness_agent.ipynb [85]

    -gemini_key = "Enter your api key here"  # Get from https://aistudio.google.com/apikey
    +gemini_key = os.getenv("GOOGLE_API_KEY") or input("Enter your Gemini API key: ")

    [To ensure code accuracy, apply this suggestion manually]

    Suggestion importance[1-10]: 7


    Why: The suggestion correctly identifies a hardcoded placeholder for an API key and proposes a more secure method using environment variables or user input, which is a best practice for example notebooks.

    Medium
    Replace hardcoded API key placeholder

    The hardcoded API key should be replaced with a placeholder or environment
    variable reference. Exposing actual API keys in code examples poses a security
    risk and provides no functional value to users.

    examples/cookbooks/ai_data_analysis_agent.ipynb [85]

    -openai_key = "sk-.."
    +openai_key = "your-openai-api-key-here"

    [To ensure code accuracy, apply this suggestion manually]

    Suggestion importance[1-10]: 4


    Why: The suggestion correctly identifies a hardcoded API key placeholder and suggests a more explicit one, which improves clarity for the user, but the existing code already uses a non-functional placeholder.

    Low
    Possible issue
    Handle invalid user input

    Wrap user input parsing in try-except blocks to handle invalid input gracefully.
    This prevents the application from crashing when users enter non-numeric values.

    examples/cookbooks/ai_health_fitness_agent.ipynb [787-789]

    -age = int(input("Age: "))
    -weight = float(input("Weight (kg): "))
    -height = float(input("Height (cm): "))
    +try:
    +    age = int(input("Age: "))
    +    weight = float(input("Weight (kg): "))
    +    height = float(input("Height (cm): "))
    +except ValueError:
    +    print("❌ Please enter valid numeric values")
    +    return

    [To ensure code accuracy, apply this suggestion manually]

    Suggestion importance[1-10]: 8


    Why: This suggestion correctly identifies that converting user input to numeric types without error handling can crash the program. Adding a try-except block is a crucial improvement for the script's robustness and user experience.

    Medium
    Add input validation checks

    Add input validation to prevent division by zero and ensure positive values for
    weight and height. This will prevent runtime errors when invalid data is
    provided.

    examples/cookbooks/ai_health_fitness_agent.ipynb [129-133]

    +# Validate inputs
    +if weight_kg <= 0 or height_cm <= 0:
    +    raise ValueError("Weight and height must be positive values")
    +
     # Convert height from cm to meters
     height_m = height_cm / 100
     
     # Calculate BMI
     bmi = weight_kg / (height_m ** 2)

    [To ensure code accuracy, apply this suggestion manually]

    Suggestion importance[1-10]: 7


    Why: The suggestion correctly points out the lack of input validation for weight_kg and height_cm, which could lead to a ZeroDivisionError. Adding these checks improves the robustness and provides more specific error messages than the existing generic try-except block.

    Medium
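Putting the validation together with the notebook's BMI formula, a complete helper might look like this (the function name is illustrative; categories follow the standard WHO bands):

```python
def calculate_bmi(weight_kg: float, height_cm: float) -> tuple[float, str]:
    """Return (BMI, category), validating inputs first."""
    if weight_kg <= 0 or height_cm <= 0:
        raise ValueError("Weight and height must be positive values")
    height_m = height_cm / 100          # convert height from cm to meters
    bmi = weight_kg / (height_m ** 2)   # BMI = kg / m^2
    if bmi < 18.5:
        category = "Underweight"
    elif bmi < 25:
        category = "Normal weight"
    elif bmi < 30:
        category = "Overweight"
    else:
        category = "Obese"
    return round(bmi, 1), category
```

Raising `ValueError` with a specific message gives the caller something actionable, unlike a generic catch-all.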
    General
    Add file validation and size limits

    The file processing lacks proper validation and could fail with malformed files.
    Add file size limits and validate file structure before processing to prevent
    potential crashes or resource exhaustion.

    examples/cookbooks/ai_data_analysis_agent.ipynb [161-169]

     def preprocess_file(self, file) -> tuple:
         """Preprocess uploaded file and return processed data"""
         try:
    +        # Validate file size (e.g., 50MB limit)
    +        if hasattr(file, 'size') and file.size > 50 * 1024 * 1024:
    +            return None, None, None, "File too large (max 50MB)"
    +        
             if file.name.endswith('.csv'):
                 df = pd.read_csv(file, encoding='utf-8', na_values=['NA', 'N/A', 'missing'])
             elif file.name.endswith('.xlsx'):
                 df = pd.read_excel(file, na_values=['NA', 'N/A', 'missing'])
             else:
                 return None, None, None, "Unsupported file format"
    +        
    +        # Validate DataFrame is not empty
    +        if df.empty:
    +            return None, None, None, "File contains no data"

    [To ensure code accuracy, apply this suggestion manually]

    Suggestion importance[1-10]: 7


    Why: This suggestion correctly points out a lack of input validation and proposes adding checks for file size and empty dataframes, which significantly improves the robustness of the file preprocessing logic.

    Medium


    codecov bot commented Jul 8, 2025

    Codecov Report

    All modified and coverable lines are covered by tests ✅

    Project coverage is 14.23%. Comparing base (a80bc74) to head (88e40e6).
    Report is 35 commits behind head on main.

    Additional details and impacted files
    @@           Coverage Diff           @@
    ##             main     #755   +/-   ##
    =======================================
      Coverage   14.23%   14.23%           
    =======================================
      Files          25       25           
      Lines        2571     2571           
      Branches      367      367           
    =======================================
      Hits          366      366           
      Misses       2189     2189           
      Partials       16       16           
    Flag Coverage Δ
    quick-validation 0.00% <ø> (ø)
    unit-tests 14.23% <ø> (ø)

    Flags with carried forward coverage won't be shown. Click here to find out more.

    ☔ View full report in Codecov by Sentry.
    📢 Have feedback on the report? Share it here.

    🚀 New features to boost your workflow:
    • ❄️ Test Analytics: Detect flaky tests, report on failures, and find test suite problems.
    • 📦 JS Bundle Analysis: Save yourself from yourself by tracking and limiting bundle sizes in JS merges.


    @gemini-code-assist gemini-code-assist bot left a comment

    Choose a reason for hiding this comment

    The reason will be displayed to describe this comment to others. Learn more.

    Code Review

    This pull request adds a collection of 6 new example notebooks, demonstrating various AI agent capabilities. The notebooks are a great addition and showcase different use cases.

    My review focuses on improving security, correctness, and maintainability. The main points are:

    • Security: All notebooks include hardcoded placeholder API keys, which is a security risk. I've suggested using environment variables or getpass instead.
    • Correctness: A few notebooks have bugs that prevent them from running successfully. For example, an incorrect NLTK package name, a compatibility issue with a library, and an incomplete implementation of a core feature.
    • Maintainability: I've pointed out a few places where code can be refactored for better clarity and where error handling can be more specific.

    "\n",
    "# Download all necessary NLTK data for newspaper3k\n",
    "import nltk\n",
    "nltk.download('tokenizers/punkt')\n",

    critical

    The NLTK package name is punkt, not tokenizers/punkt. This will cause a download error, as seen in the notebook's output. Using quiet=True is also recommended for a cleaner output, consistent with the other downloads.

    nltk.download('punkt', quiet=True)
    

    " return f\"https://i.imgflip.com/{meme_id}.jpg\"\n",
    " return None\n",
    " except Exception as e:\n",
    " print(f\"Error in meme generation: {str(e)}\")\n",

    critical

    The notebook fails during meme generation with the error "ChatOpenAI" object has no field "ainvoke". This indicates an incompatibility between the browser-use library and the langchain LLM object you are passing. The notebook should be fixed to be runnable, as this error prevents its core functionality from working.

    Comment on lines +67 to +68
    "import os\n",
    "os.environ[\"OPENAI_API_KEY\"] = \"sk-...\" # <-- Replace with your actual OpenAI API key"

    high

    Hardcoding API keys, even as placeholders, is a security risk as it might lead to users accidentally committing their real keys. It's better to load keys from environment variables or use a secure input method like getpass. This also makes the notebook more portable and secure.

    import os
    from getpass import getpass
    
    # For better security, load the API key from an environment variable or prompt for it.
    # In Google Colab, you can use the "Secrets" tab to store your API key.
    if "OPENAI_API_KEY" not in os.environ:
        os.environ["OPENAI_API_KEY"] = getpass("Enter your OpenAI API key: ")
    

    Comment on lines +81 to +90
    "# Setup Key\n",
    "import os\n",
    "\n",
    "# Set your Anthropic API key\n",
    "anthropic_key = \"your_anthropic_key_here\" # Get from https://console.anthropic.com\n",
    "\n",
    "# Set environment variable\n",
    "os.environ[\"ANTHROPIC_API_KEY\"] = anthropic_key\n",
    "\n",
    "print(\"✅ Anthropic API key configured!\")"

    high

    Hardcoding API keys, even as placeholders, is a security risk as it might lead to users accidentally committing their real keys. It's better to load keys from environment variables or use a secure input method like getpass. This also makes the notebook more portable and secure.

    # Setup Key
    import os
    from getpass import getpass
    
    # For better security, load the API key from an environment variable or prompt for it.
    # In Google Colab, you can use the "Secrets" tab to store your API key.
    if "ANTHROPIC_API_KEY" not in os.environ:
        os.environ["ANTHROPIC_API_KEY"] = getpass("Enter your Anthropic API key: ")
    
    print("✅ Anthropic API key configured!")
    

    Comment on lines +84 to +87
    "import os\n",
    "openai_key = \"sk-..\"\n",
    "\n",
    "os.environ[\"OPENAI_API_KEY\"] = openai_key\n",
    high

    Hardcoding API keys, even as placeholders, is a security risk. It's better to load keys from environment variables or use a secure input method like getpass.

    import os
    from getpass import getpass
    
    openai_key = os.getenv("OPENAI_API_KEY")
    if not openai_key:
        openai_key = getpass("Enter your OpenAI API key: ")
    os.environ["OPENAI_API_KEY"] = openai_key
    

    Comment on lines +82 to +98
    "import os\n",
    "\n",
    "# Set your API keys\n",
    "openai_key = \"Enter you api key here\"\n",
    "anthropic_key = \"Enter you api key here\" # Get from https://console.anthropic.com\n",
    "deepseek_key = \"Enter you api key here\" # Get from https://platform.deepseek.com\n",
    "\n",
    "# Set environment variables\n",
    "os.environ[\"OPENAI_API_KEY\"] = openai_key\n",
    "os.environ[\"ANTHROPIC_API_KEY\"] = anthropic_key\n",
    "os.environ[\"DEEPSEEK_API_KEY\"] = deepseek_key\n",
    "\n",
    "# Model selection\n",
    "model_choice = \"OpenAI\" # Options: \"OpenAI\", \"Claude\", \"Deepseek\"\n",
    "\n",
    "print(\"✅ API keys configured!\")\n",
    "print(f\"✅ Using model: {model_choice}\")"
    high

    Hardcoding API keys, even as placeholders, is a security risk. It's better to load keys from environment variables or use a secure input method like getpass for each key.

    import os
    from getpass import getpass
    
    # For better security, load API keys from environment variables or prompt for them.
    # In Google Colab, you can use the "Secrets" tab to store your API keys.
    if os.getenv("OPENAI_API_KEY") is None:
        os.environ["OPENAI_API_KEY"] = getpass("Enter your OpenAI API key: ")
    if os.getenv("ANTHROPIC_API_KEY") is None:
        os.environ["ANTHROPIC_API_KEY"] = getpass("Enter your Anthropic API key: ")
    if os.getenv("DEEPSEEK_API_KEY") is None:
        os.environ["DEEPSEEK_API_KEY"] = getpass("Enter your Deepseek API key: ")
    
    # Model selection
    model_choice = "OpenAI"  # Options: "OpenAI", "Claude", "Deepseek"
    
    print("✅ API keys configured!")
    print(f"✅ Using model: {model_choice}")
    

    Comment on lines +83 to +86
    "import os\n",
    "openai_key = \"sk-..\"\n",
    "\n",
    "os.environ[\"OPENAI_API_KEY\"] = openai_key\n",
    high

    Hardcoding API keys, even as placeholders, is a security risk. It's better to load keys from environment variables or use a secure input method like getpass.

    import os
    from getpass import getpass
    
    openai_key = os.getenv("OPENAI_API_KEY")
    if not openai_key:
        openai_key = getpass("Enter your OpenAI API key: ")
    os.environ["OPENAI_API_KEY"] = openai_key
    

    Comment on lines +842 to +845
    " # Here you would integrate with local LLM for answer generation\n",
    " print(f\"\\n💡 AI Answer (using local LLM):\")\n",
    " print(\"Based on the document content, here's what I found...\")\n",
    " print(\"(This would be generated by the local LLM model)\")\n",
    high

    The notebook is missing the actual integration with a local LLM for generating answers. The code currently just prints a placeholder message. To fulfill the notebook's purpose of being a RAG agent, this part should be implemented.

    For example, you could use the ollama library to call a local model with the retrieved context:

    # Example using ollama
    import ollama
    context = "\n".join([search_result['documents'][0][i] for i in range(search_result['num_results'])])
    llm_prompt = f"Based on the following context, answer the question.\nContext: {context}\nQuestion: {question}"
    response = ollama.chat(model='llama3', messages=[{'role': 'user', 'content': llm_prompt}])
    print(f"💡 AI Answer: {response['message']['content']}")

    " \"summary\": a.summary,\n",
    " \"url\": article[\"url\"]\n",
    " })\n",
    " except Exception as e:\n",
    medium

    Catching a broad Exception can hide unrelated errors and make debugging harder. It's better to catch more specific exceptions that you expect from the newspaper3k library, like ArticleException. You would need to import it with from newspaper import ArticleException.

                except Exception as e: # Consider using a more specific exception, e.g., from newspaper import ArticleException
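A sketch of that narrower handler. It assumes newspaper3k exposes `ArticleException` from the package root (true for recent releases); the `except ImportError` stand-ins only exist so the snippet stays runnable for illustration when the library is absent:

```python
# Prefer newspaper3k's ArticleException over a bare Exception so unrelated
# bugs (KeyError, AttributeError, ...) still surface during development.
try:
    from newspaper import Article, ArticleException
except ImportError:  # newspaper3k absent: stand-ins keep the sketch runnable
    class ArticleException(Exception):
        pass

    class Article:
        def __init__(self, url):
            self.url = url

        def download(self):
            raise ArticleException("newspaper3k is not installed")

def safe_summary(url: str, title: str) -> dict:
    try:
        a = Article(url)
        a.download()
        a.parse()
        a.nlp()
        return {"title": title, "summary": a.summary, "url": url}
    except ArticleException as e:  # download/parse failures only
        return {"title": title, "summary": f"Error summarizing article: {e}", "url": url}
```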
    

    Comment on lines +124 to +142
    "    if chart_type == 'bar':\n",
    "        fig = px.bar(df, x=x_column, y=y_column, title=title, color_discrete_sequence=['#1f77b4'])\n",
    "    elif chart_type == 'line':\n",
    "        fig = px.line(df, x=x_column, y=y_column, title=title, color_discrete_sequence=['#2ca02c'])\n",
    "    elif chart_type == 'scatter':\n",
    "        fig = px.scatter(df, x=x_column, y=y_column, title=title, color_discrete_sequence=['#ff7f0e'])\n",
    "    elif chart_type == 'histogram':\n",
    "        fig = px.histogram(df, x=x_column, title=title, color_discrete_sequence=['#d62728'])\n",
    "    elif chart_type == 'box':\n",
    "        fig = px.box(df, x=x_column, y=y_column, title=title, color_discrete_sequence=['#9467bd'])\n",
    "    elif chart_type == 'pie':\n",
    "        fig = px.pie(df, values=y_column, names=x_column, title=title)\n",
    "    elif chart_type == 'heatmap':\n",
    "        corr_matrix = df.corr()\n",
    "        fig = px.imshow(corr_matrix, title=title, color_continuous_scale='RdBu')\n",
    "    elif chart_type == 'area':\n",
    "        fig = px.area(df, x=x_column, y=y_column, title=title, color_discrete_sequence=['#8c564b'])\n",
    "    else:\n",
    "        return \"Unsupported chart type\"\n",
    medium

    This long if/elif/else chain can be refactored into a dictionary lookup to make it more concise and easier to extend with new chart types. This improves maintainability.

    For example:

    chart_map = {
        'bar': (px.bar, {'color_discrete_sequence': ['#1f77b4']}),
        'line': (px.line, {'color_discrete_sequence': ['#2ca02c']}),
        # ... and so on
    }
    if chart_type in chart_map:
        plot_func, kwargs = chart_map[chart_type]
        # ... handle args and call plot_func
    else:
        return "Unsupported chart type"
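A minimal, self-contained version of that dispatch-table shape. The lambdas below are stand-ins for `px.bar`, `px.line`, etc. (not imported here) so the pattern itself is runnable anywhere; in the notebook each value would be the real Plotly Express function:

```python
# Dispatch-table sketch: map chart_type to (plot function, default kwargs).
def make_chart(chart_type, df=None, x_column=None, y_column=None, title=""):
    chart_map = {
        "bar": (lambda **kw: ("bar", kw), {"color_discrete_sequence": ["#1f77b4"]}),
        "line": (lambda **kw: ("line", kw), {"color_discrete_sequence": ["#2ca02c"]}),
    }
    if chart_type not in chart_map:
        return "Unsupported chart type"
    plot_func, defaults = chart_map[chart_type]
    # With plotly this would be: plot_func(df, x=x_column, y=y_column, title=title, **defaults)
    return plot_func(x=x_column, y=y_column, title=title, **defaults)
```

Adding a new chart type then becomes a one-line entry in `chart_map` instead of another `elif` branch.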

    @coderabbitai coderabbitai bot left a comment

    Actionable comments posted: 17

    🔭 Outside diff range comments (1)
    examples/cookbooks/AI_Enrollment_Counselor.ipynb (1)

    445-1050: Fix critical merge conflict in the notebook structure.

    The file contains duplicate/conflicting content starting at line 445, which appears to be a Git merge conflict that wasn't properly resolved. This makes the notebook invalid JSON and unusable.

    The duplicate content from lines 445-1050 should be completely removed. The correct notebook structure should end at line 444 with the closing brace and newline. Here's what should be done:

    - },
    -  "nbformat": 4,
    -  "nbformat_minor": 0
    -}
    -=======
    -
    -  "nbformat": 4,
    -  "nbformat_minor": 0,
    -  "metadata": {
    -    [... all the duplicate content ...]
    -  }
    -}

    Remove all content from line 445 onwards, as the notebook should properly end at line 444.

    🧹 Nitpick comments (9)
    examples/cookbooks/ai_market_startup_trend_agent.ipynb (2)

    49-49: Consider updating the deprecated newspaper3k package

    The newspaper3k package hasn't been maintained since 2020. Consider using newspaper4k which is an actively maintained fork with better Python 3 support and bug fixes.

    -!pip install praisonai streamlit duckduckgo-search "newspaper3k[lxml]" anthropic lxml_html_clean
    +!pip install praisonai streamlit duckduckgo-search "newspaper4k[lxml]" anthropic

    156-164: Implement actual trend extraction using LLM

    The TrendInsightTool currently just concatenates summaries without performing actual trend analysis. The TODO comment indicates this is incomplete.

    Would you like me to help implement proper trend extraction using the Anthropic API to analyze the summaries and identify patterns, opportunities, and risks?
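    One hedged sketch of what that LLM-backed extraction could look like. The prompt builder runs locally; the Anthropic call (the `messages.create` API and the pinned model name) follows the SDK's documented shape but is an assumption about how the notebook would wire it in, and it requires `ANTHROPIC_API_KEY` to actually run:

    ```python
    # Sketch: delegate trend extraction to Claude instead of concatenating summaries.
    from typing import List, Dict, Any

    def build_trend_prompt(summaries: List[Dict[str, Any]]) -> str:
        bullet_list = "\n".join(
            f"- {s['title']}: {s['summary']}" for s in summaries if s.get("summary")
        )
        return (
            "You are a market analyst. From the article summaries below, identify "
            "emerging trends, startup opportunities, and risks. Use markdown bullets.\n\n"
            + bullet_list
        )

    def extract_trends_llm(summaries: List[Dict[str, Any]],
                           model: str = "claude-3-5-sonnet-20240620") -> str:
        import anthropic  # deferred so the prompt builder stays dependency-free
        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
        resp = client.messages.create(
            model=model,
            max_tokens=1500,
            messages=[{"role": "user", "content": build_trend_prompt(summaries)}],
        )
        return resp.content[0].text
    ```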

    examples/cookbooks/local_rag_document_qa_agent.ipynb (2)

    209-213: Make ChromaDB path configurable and handle cleanup

    The ChromaDB path is hardcoded which may cause permission issues in some environments.

     class VectorDatabaseTool:
    -    def __init__(self, collection_name: str = "document_qa"):
    +    def __init__(self, collection_name: str = "document_qa", db_path: str = None):
             self.collection_name = collection_name
    +        if db_path is None:
    +            db_path = os.path.join(tempfile.gettempdir(), "chroma_db")
             # Use new ChromaDB client configuration
    -        self.client = chromadb.PersistentClient(path="./chroma_db")
    +        self.client = chromadb.PersistentClient(path=db_path)
             self.collection = self.client.get_or_create_collection(name=collection_name)

    842-845: Complete the LLM integration for answer generation

    The Q&A loop retrieves relevant chunks but doesn't generate actual answers using the local LLM.

    Would you like me to help implement the integration with Ollama to generate contextual answers based on the retrieved chunks? This would complete the RAG pipeline.

    examples/cookbooks/ai_meme_creator_agent.ipynb (1)

    226-247: Add query validation for caption generation

    The caption generation directly uses the query in formatted strings without length validation, which could create overly long captions.

     def _generate_setup_punchline_captions(self, query: str) -> List[Dict[str, str]]:
         """Generate setup-punchline style captions"""
         captions = []
         
    +    # Truncate query if too long for meme text
    +    if len(query) > 100:
    +        query = query[:97] + "..."
    +    
         # Extract key elements from query
         words = query.split()
    examples/cookbooks/ai_data_analysis_agent.ipynb (2)

    176-178: Improve date column detection logic

    The current date detection only checks if 'date' is in the column name (case-insensitive). Consider using pandas' built-in date detection or checking for common date patterns.

     for col in df.columns:
         if 'date' in col.lower():
             df[col] = pd.to_datetime(df[col], errors='coerce')
    +    # Also try to infer datetime for object columns
    +    elif df[col].dtype == 'object' and not pd.api.types.is_numeric_dtype(df[col]):
    +        try:
    +            # Check if the column contains date-like strings
    +            sample = df[col].dropna().head(10)
    +            if sample.astype(str).str.match(r'\d{4}-\d{2}-\d{2}|\d{2}/\d{2}/\d{4}').any():
    +                df[col] = pd.to_datetime(df[col], errors='coerce')
    +        except:
    +            pass

    834-1014: Well-structured main application with comprehensive features!

    The implementation provides a complete data analysis workflow with file upload, preprocessing, analysis, and visualization. Consider adding a try-except wrapper around the entire main section to handle unexpected errors gracefully.

    You might want to wrap the main execution in a try-except block:

    try:
        # Main Application code...
    except Exception as e:
        print(f"❌ An unexpected error occurred: {str(e)}")
        print("Please check your input data and try again.")
    examples/cookbooks/ai_health_fitness_agent.ipynb (1)

    1-1021: Consider extracting common patterns into shared utilities

    Both notebooks share similar patterns (tool classes, YAML prompts, error handling). Consider creating a shared module for common functionality to improve maintainability and reduce code duplication across the cookbook examples.

    For example:

    • Base tool class with common error handling
    • Shared input validation utilities
    • Common visualization helpers
    • Standardized YAML prompt structure
    examples/cookbooks/AI_Enrollment_Counselor.ipynb (1)

    400-417: Good implementation with potential for enhancement.

    The ask_enrollment_agent helper function is well-designed and handles both document validation and general queries effectively.

    Consider adding input validation and error handling:

     def ask_enrollment_agent(query, submitted=None, required=None):
    +    if not query or not isinstance(query, str):
    +        raise ValueError("Query must be a non-empty string")
    +    
         if submitted and required:
    +        if not isinstance(submitted, list) or not isinstance(required, list):
    +            raise TypeError("Document lists must be of type list")
             prompt = (
                 f"Applicant submitted documents: {submitted}\n"
                 f"Required documents: {required}\n"
                 f"{query}\n"
                 "List any missing documents and provide guidance."
             )
    -        return enrollment_agent.start(prompt)
    +        try:
    +            return enrollment_agent.start(prompt)
    +        except Exception as e:
    +            return f"Error processing request: {str(e)}"
         else:
    -        return enrollment_agent.start(query)
    +        try:
    +            return enrollment_agent.start(query)
    +        except Exception as e:
    +            return f"Error processing request: {str(e)}"
    📜 Review details

    Configuration used: CodeRabbit UI
    Review profile: CHILL
    Plan: Pro

    📥 Commits

    Reviewing files that changed from the base of the PR and between dfc7dc1 and 88e40e6.

    📒 Files selected for processing (7)
    • examples/cookbooks/AI_Enrollment_Counselor.ipynb (2 hunks)
    • examples/cookbooks/Ai_Market_Startup_Trend_Agent.ipynb (1 hunks)
    • examples/cookbooks/ai_data_analysis_agent.ipynb (1 hunks)
    • examples/cookbooks/ai_health_fitness_agent.ipynb (1 hunks)
    • examples/cookbooks/ai_market_startup_trend_agent.ipynb (1 hunks)
    • examples/cookbooks/ai_meme_creator_agent.ipynb (1 hunks)
    • examples/cookbooks/local_rag_document_qa_agent.ipynb (1 hunks)
    🧰 Additional context used
    🧠 Learnings (4)
    📓 Common learnings
    Learnt from: CR
    PR: MervinPraison/PraisonAI#0
    File: src/praisonai-ts/.cursorrules:0-0
    Timestamp: 2025-06-30T10:05:51.843Z
    Learning: Applies to src/praisonai-ts/src/agents/agents.ts : The 'PraisonAIAgents' class in 'src/agents/agents.ts' should manage multiple agents, tasks, memory, and process type, mirroring the Python 'agents.py'.
    
    examples/cookbooks/ai_market_startup_trend_agent.ipynb (1)
    Learnt from: CR
    PR: MervinPraison/PraisonAI#0
    File: src/praisonai-agents/CLAUDE.md:0-0
    Timestamp: 2025-06-30T10:06:17.673Z
    Learning: Use the `Agent` class from `praisonaiagents/agent/` for core agent implementations, supporting LLM integration, tool calling, and self-reflection.
    
    examples/cookbooks/ai_meme_creator_agent.ipynb (1)
    Learnt from: CR
    PR: MervinPraison/PraisonAI#0
    File: src/praisonai-ts/.cursorrules:0-0
    Timestamp: 2025-06-30T10:05:51.843Z
    Learning: Applies to src/praisonai-ts/src/llm/llm.ts : The 'LLM' class in 'llm.ts' should wrap 'aisdk.generateText' calls for generating text responses.
    
    examples/cookbooks/local_rag_document_qa_agent.ipynb (2)
    Learnt from: CR
    PR: MervinPraison/PraisonAI#0
    File: src/praisonai-ts/.cursorrules:0-0
    Timestamp: 2025-06-30T10:05:51.843Z
    Learning: Applies to src/praisonai-ts/src/knowledge/knowledge.ts : The 'Knowledge' class in 'src/knowledge/knowledge.ts' should handle knowledge storage, chunking, and retrieval, including vector store and RAG functionality.
    
    Learnt from: CR
    PR: MervinPraison/PraisonAI#0
    File: src/praisonai-agents/CLAUDE.md:0-0
    Timestamp: 2025-06-30T10:06:17.673Z
    Learning: Applies to src/praisonai-agents/praisonaiagents/{memory,knowledge}/**/*.py : Place memory-related implementations in `praisonaiagents/memory/` and knowledge/document processing in `praisonaiagents/knowledge/`.
    
    ⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
    • GitHub Check: quick-test
    • GitHub Check: test-core (3.11)
    🔇 Additional comments (4)
    examples/cookbooks/local_rag_document_qa_agent.ipynb (1)

    275-301: Review text chunking edge cases

    The chunking logic looks correct but verify that all sentences are included, especially when a sentence exactly fits the remaining chunk size.

    The implementation handles overlapping chunks well and includes the final chunk. Good error handling with try-except.
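    A quick way to exercise that boundary case locally. This is an independent re-sketch of sentence-window chunking with overlap (not the notebook's exact code), plus the completeness check the comment asks for:

    ```python
    # Overlapping sentence chunks: step = chunk_size - overlap, and we stop once a
    # chunk reaches the final sentence so no pure-overlap tail chunks are emitted.
    def chunk_sentences(sentences, chunk_size=3, overlap=1):
        step = max(chunk_size - overlap, 1)
        chunks = []
        for start in range(0, len(sentences), step):
            chunks.append(sentences[start:start + chunk_size])
            if start + chunk_size >= len(sentences):
                break
        return chunks

    # Every sentence must land in at least one chunk, including when the last
    # sentence exactly fills (or underfills) the final window.
    for n in range(1, 12):
        sents = [f"s{i}" for i in range(n)]
        covered = {s for chunk in chunk_sentences(sents) for s in chunk}
        assert covered == set(sents)
    ```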

    examples/cookbooks/ai_meme_creator_agent.ipynb (1)

    726-728: Handle asyncio properly in Google Colab environment

    The code uses await directly which may not work in all Colab environments.

    # Check if we're in an async context and handle accordingly
    import nest_asyncio
    nest_asyncio.apply()
    
    # Then run the async function
    import asyncio
    meme_url = asyncio.run(generate_meme(query, model_choice, api_key))

    Note: You may need to install nest_asyncio for proper async handling in Jupyter/Colab.

    examples/cookbooks/ai_health_fitness_agent.ipynb (1)

    117-224: Excellent BMI calculator implementation!

    The tool provides comprehensive BMI analysis with accurate health categories, personalized recommendations, and health risk assessment. The error handling and structured response format are well-designed.

    examples/cookbooks/AI_Enrollment_Counselor.ipynb (1)

    108-118: LGTM! Clear and well-structured agent configuration.

    The role, goal, and instructions are well-defined and appropriately scoped for an enrollment counselor agent.

    Comment on lines +355 to +359
    "print(\"\\n📰 Article Summaries:\")\n",
    "for i, s in enumerate(summaries, 1):\n",
    "    print(f\"{i}. {s['title']}\")\n",
    "    print(f\"   {s['summary']}\")\n",
    "    print(f\"   [Read more]({s['url']})\\n\")\n",
    🛠️ Refactor suggestion

    Add validation for empty summaries

    The code displays summaries without checking if they exist or have content, leading to empty output.

     print("\n📰 Article Summaries:")
     for i, s in enumerate(summaries, 1):
         print(f"{i}. {s['title']}")
    -    print(f"   {s['summary']}")
    +    summary_text = s.get('summary', '').strip()
    +    if summary_text and not summary_text.startswith('Error'):
    +        print(f"   {summary_text}")
    +    else:
    +        print(f"   [Summary not available]")
         print(f"   [Read more]({s['url']})\n")
    🤖 Prompt for AI Agents
    In examples/cookbooks/ai_market_startup_trend_agent.ipynb around lines 355 to
    359, the code prints article summaries without verifying if the summaries list
    is empty or contains valid content. Add a validation step before the loop to
    check if summaries exist and are not empty; if they are empty, print a message
    indicating no summaries are available instead of proceeding to print empty
    output.
    

    "\n",
    "temperature: 0.4\n",
    "max_tokens: 3500\n",
    "model: \"claude-3-5-sonnet\"\n",
    💡 Verification agent

    🧩 Analysis chain

    Verify the Claude model name format

    The model name "claude-3-5-sonnet" may not be in the correct format for the Anthropic API.


    🌐 Web query:

    What is the correct model name format for Claude 3.5 Sonnet in the Anthropic API?
    

    💡 Result:

    In the Anthropic API, the model name for Claude 3.5 Sonnet is claude-3-5-sonnet-20240620. This identifier specifies the model's version and release date, ensuring consistent behavior across different environments. (docs.anthropic.com)

    For convenience during development and testing, Anthropic offers an alias claude-3-5-sonnet-latest, which automatically points to the most recent snapshot of the Claude 3.5 Sonnet model. While aliases are useful for experimentation, it's recommended to use specific model versions (e.g., claude-3-5-sonnet-20240620) in production applications to ensure consistent performance. (docs.anthropic.com)

    Additionally, when using the Anthropic API, it's important to include the model version in the model name to guarantee consistent behavior. For example, to use Claude 3.5 Sonnet in the API, you would specify the model name as claude-3-5-sonnet@20240620. (cloud.google.com)

    Specify the full Claude 3.5 Sonnet model identifier

    The current line hard-codes claude-3-5-sonnet, which will cause API errors. You need to include the version suffix or use the alias. In your notebook:

    • File: examples/cookbooks/ai_market_startup_trend_agent.ipynb
    Line 226: replace

    model: "claude-3-5-sonnet"

    with either the fixed version

    model: "claude-3-5-sonnet-20240620"

    (recommended for production) or the development alias

    model: "claude-3-5-sonnet-latest"

    Using a specific version ensures consistent behavior across environments.

    🤖 Prompt for AI Agents
    In examples/cookbooks/ai_market_startup_trend_agent.ipynb at line 226, the model
    identifier is hard-coded as "claude-3-5-sonnet", which will cause API errors.
    Update this line to specify the full model identifier by replacing it with
    either "claude-3-5-sonnet-20240620" for a fixed production version or
    "claude-3-5-sonnet-latest" for the development alias to ensure consistent
    behavior.
    

    Comment on lines +333 to +336
    "nltk.download('tokenizers/punkt')\n",
    "nltk.download('averaged_perceptron_tagger', quiet=True)\n",
    "nltk.download('maxent_ne_chunker', quiet=True)\n",
    "nltk.download('words', quiet=True)\n",
    ⚠️ Potential issue

    Fix NLTK data download paths

    The NLTK download path 'tokenizers/punkt' is incorrect, causing download errors. Use the correct package names.

    -nltk.download('tokenizers/punkt')
    +nltk.download('punkt')
     nltk.download('averaged_perceptron_tagger', quiet=True)
     nltk.download('maxent_ne_chunker', quiet=True)
     nltk.download('words', quiet=True)

    Committable suggestion skipped: line range outside the PR's diff.

    🤖 Prompt for AI Agents
    In examples/cookbooks/ai_market_startup_trend_agent.ipynb around lines 333 to
    336, the NLTK download path 'tokenizers/punkt' is incorrect and causes errors.
    Replace 'tokenizers/punkt' with the correct package name 'punkt' in the
    nltk.download call to fix the download issue.
    

    Comment on lines +85 to +90
    "anthropic_key = \"your_anthropic_key_here\" # Get from https://console.anthropic.com\n",
    "\n",
    "# Set environment variable\n",
    "os.environ[\"ANTHROPIC_API_KEY\"] = anthropic_key\n",
    "\n",
    "print(\"✅ Anthropic API key configured!\")"
    ⚠️ Potential issue

    Add API key validation to prevent runtime errors

    The code sets a placeholder API key without validation. This will cause runtime errors when making API calls.

     # Set your Anthropic API key
     anthropic_key = "your_anthropic_key_here"  # Get from https://console.anthropic.com
     
    +# Validate API key
    +if anthropic_key == "your_anthropic_key_here" or not anthropic_key:
    +    raise ValueError("Please set a valid Anthropic API key")
    +
     # Set environment variable
     os.environ["ANTHROPIC_API_KEY"] = anthropic_key
    🤖 Prompt for AI Agents
    In examples/cookbooks/ai_market_startup_trend_agent.ipynb around lines 85 to 90,
    the code sets a placeholder Anthropic API key without validating it, which can
    lead to runtime errors during API calls. Add a validation step after setting the
    environment variable to check if the API key is not the placeholder or empty,
    and raise an informative error or prompt the user to provide a valid key before
    proceeding.
    

    Comment on lines +1 to +402
    {
    "cells": [
    {
    "cell_type": "markdown",
    "metadata": {
    "id": "Fi_y0ooAzjjy"
    },
    "source": [
    "# AI Market & Startup Trend Agent"
    ]
    },
    {
    "cell_type": "markdown",
    "metadata": {
    "id": "M18iBcPYzl9d"
    },
    "source": [
    "* An AI-powered agent that analyzes current market and startup trends using real-time news, web search, and multi-agent collaboration.\n",
    "* The agent collects recent articles, summarizes key insights, and identifies emerging opportunities for entrepreneurs and investors.\n",
    "* Features include automated news gathering, trend summarization, and actionable reports on startup opportunities in any area of interest."
    ]
    },
    {
    "cell_type": "markdown",
    "metadata": {
    "id": "SfsvzoD_3JtE"
    },
    "source": [
    "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Dhivya-Bharathy/PraisonAI/blob/main/examples/cookbooks/ai_market_startup_trend_agent.ipynb)\n"
    ]
    },
    {
    "cell_type": "markdown",
    "metadata": {
    "id": "6par7OlW0KVF"
    },
    "source": [
    "# Dependencies"
    ]
    },
    {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {
    "id": "oJA89ujM0OSd"
    },
    "outputs": [],
    "source": [
    "!pip install praisonai streamlit duckduckgo-search \"newspaper3k[lxml]\" anthropic lxml_html_clean"
    ]
    },
    {
    "cell_type": "markdown",
    "metadata": {
    "id": "bK4r7sQ_0hz0"
    },
    "source": [
    "# Setup Key"
    ]
    },
    {
    "cell_type": "code",
    "execution_count": 5,
    "metadata": {
    "colab": {
    "base_uri": "https://localhost:8080/"
    },
    "id": "9UViB-oJ0izp",
    "outputId": "f1db522f-5e30-4f14-8fc1-5c5b814a10c7"
    },
    "outputs": [
    {
    "name": "stdout",
    "output_type": "stream",
    "text": [
    "✅ Anthropic API key configured!\n"
    ]
    }
    ],
    "source": [
    "# Setup Key\n",
    "import os\n",
    "\n",
    "# Set your Anthropic API key\n",
    "anthropic_key = \"your_anthropic_key_here\" # Get from https://console.anthropic.com\n",
    "\n",
    "# Set environment variable\n",
    "os.environ[\"ANTHROPIC_API_KEY\"] = anthropic_key\n",
    "\n",
    "print(\"✅ Anthropic API key configured!\")"
    ]
    },
    {
    "cell_type": "markdown",
    "metadata": {
    "id": "TX6AKmiE0sK3"
    },
    "source": [
    "# Tools"
    ]
    },
    {
    "cell_type": "code",
    "execution_count": 6,
    "metadata": {
    "id": "afU4kjcz0tCk"
    },
    "outputs": [],
    "source": [
    "# Custom News & Trend Analysis Tools\n",
    "\n",
    "from duckduckgo_search import DDGS\n",
    "from newspaper import Article\n",
    "from typing import List, Dict, Any\n",
    "\n",
    "class NewsSearchTool:\n",
    "    def __init__(self, max_results: int = 5):\n",
    "        self.max_results = max_results\n",
    "\n",
    "    def search_news(self, topic: str) -> List[Dict[str, Any]]:\n",
    "        \"\"\"Search for recent news articles on a topic using DuckDuckGo.\"\"\"\n",
    "        results = []\n",
    "        with DDGS() as ddgs:\n",
    "            for r in ddgs.news(topic, max_results=self.max_results):\n",
    "                results.append({\n",
    "                    \"title\": r.get(\"title\"),\n",
    "                    \"url\": r.get(\"url\"),\n",
    "                    \"date\": r.get(\"date\"),\n",
    "                    \"body\": r.get(\"body\")\n",
    "                })\n",
    "        return results\n",
    "\n",
    "class ArticleSummaryTool:\n",
    "    def summarize_articles(self, articles: List[Dict[str, Any]]) -> List[Dict[str, Any]]:\n",
    "        \"\"\"Summarize the content of news articles using Newspaper3k.\"\"\"\n",
    "        summaries = []\n",
    "        for article in articles:\n",
    "            try:\n",
    "                a = Article(article[\"url\"])\n",
    "                a.download()\n",
    "                a.parse()\n",
    "                a.nlp()\n",
    "                summaries.append({\n",
    "                    \"title\": article[\"title\"],\n",
    "                    \"summary\": a.summary,\n",
    "                    \"url\": article[\"url\"]\n",
    "                })\n",
    "            except Exception as e:\n",
    "                summaries.append({\n",
    "                    \"title\": article[\"title\"],\n",
    "                    \"summary\": f\"Error summarizing article: {str(e)}\",\n",
    "                    \"url\": article[\"url\"]\n",
    "                })\n",
    "        return summaries\n",
    "\n",
    "class TrendInsightTool:\n",
    "    def extract_trends(self, summaries: List[Dict[str, Any]]) -> Dict[str, Any]:\n",
    "        \"\"\"Extract and aggregate trend insights from article summaries.\"\"\"\n",
    "        all_text = \" \".join([s[\"summary\"] for s in summaries if \"summary\" in s])\n",
    "        # For demo: just return the combined summaries\n",
    "        # In production: use LLM to extract trends and opportunities\n",
    "        return {\n",
    "            \"trend_report\": all_text[:2000] + (\"...\" if len(all_text) > 2000 else \"\")\n",
    "        }"
    ]
    },
    {
    "cell_type": "markdown",
    "metadata": {
    "id": "CVCoaGTO0-Ue"
    },
    "source": [
    "# YAML Prompt"
    ]
    },
    {
    "cell_type": "code",
    "execution_count": 7,
    "metadata": {
    "colab": {
    "base_uri": "https://localhost:8080/"
    },
    "id": "6xu0NeHy0_ZV",
    "outputId": "2d85f20a-8af4-49a4-ec76-9db4102e0d51"
    },
    "outputs": [
    {
    "name": "stdout",
    "output_type": "stream",
    "text": [
    "✅ YAML Prompt configured!\n"
    ]
    }
    ],
    "source": [
    "# YAML Prompt\n",
    "yaml_prompt = \"\"\"\n",
    "name: \"AI Market & Startup Trend Agent\"\n",
    "description: \"Expert market analyst that gathers, summarizes, and analyzes startup and market trends from real-time news sources\"\n",
    "instructions:\n",
    " - \"You are an expert market and startup trend analyst\"\n",
    " - \"Search for the latest news and articles on the user's topic of interest\"\n",
    " - \"Summarize the key points and insights from each article\"\n",
    " - \"Aggregate the summaries to identify emerging trends and startup opportunities\"\n",
    " - \"Present findings in a clear, actionable report for entrepreneurs and investors\"\n",
    " - \"Cite sources and provide links to original articles\"\n",
    " - \"Highlight any patterns, risks, or opportunities you discover\"\n",
    " - \"Use bullet points and markdown formatting for clarity\"\n",
    "\n",
    "tools:\n",
    " - name: \"NewsSearchTool\"\n",
    " description: \"Searches for recent news articles on a given topic using DuckDuckGo\"\n",
    " - name: \"ArticleSummaryTool\"\n",
    " description: \"Summarizes the content of news articles using Newspaper3k\"\n",
    " - name: \"TrendInsightTool\"\n",
    " description: \"Extracts and aggregates trend insights from article summaries\"\n",
    "\n",
    "output_format:\n",
    " - \"Provide a trend analysis report with actionable insights\"\n",
    " - \"Include a list of summarized articles with links\"\n",
    " - \"Highlight key opportunities and risks\"\n",
    " - \"Use clear, structured formatting with sections for news, summaries, and trends\"\n",
    "\n",
    "temperature: 0.4\n",
    "max_tokens: 3500\n",
    "model: \"claude-3-5-sonnet\"\n",
    "\"\"\"\n",
    "\n",
    "print(\"✅ YAML Prompt configured!\")"
    ]
    },
    {
    "cell_type": "markdown",
    "metadata": {
    "id": "MaX_eyRB1jtF"
    },
    "source": [
    "# Main"
    ]
    },
    {
    "cell_type": "code",
    "execution_count": 13,
    "metadata": {
    "colab": {
    "base_uri": "https://localhost:8080/"
    },
    "collapsed": true,
    "id": "fmMkG5hR1lS7",
    "outputId": "0833615c-c919-406e-9646-9cd8d74f78ed"
    },
    "outputs": [
    {
    "name": "stderr",
    "output_type": "stream",
    "text": [
    "[nltk_data] Error loading tokenizers/punkt: Package 'tokenizers/punkt'\n",
    "[nltk_data] not found in index\n"
    ]
    },
    {
    "name": "stdout",
    "output_type": "stream",
    "text": [
    "📈 AI Market & Startup Trend Agent\n",
    "Analyze current market and startup trends using real-time news and AI summarization!\n",
    "\n",
    "Enter the area of interest for your Startup or Market Trend Analysis: Chennai\n",
    "\n",
    "🔍 Searching for recent news on: Chennai\n"
    ]
    },
    {
    "name": "stderr",
    "output_type": "stream",
    "text": [
    "/tmp/ipython-input-6-1293765785.py:14: RuntimeWarning: This package (`duckduckgo_search`) has been renamed to `ddgs`! Use `pip install ddgs` instead.\n",
    " with DDGS() as ddgs:\n"
    ]
    },
    {
    "name": "stdout",
    "output_type": "stream",
    "text": [
    "✅ Found 5 articles. Summarizing...\n",
    "\n",
    "📰 Article Summaries:\n",
    "1. Chennai woman breaks down in front of traffic cop, shares the reason in viral post\n",
    " \n",
    " [Read more](https://www.msn.com/en-in/news/other/chennai-woman-breaks-down-in-front-of-traffic-cop-shares-the-reason-in-viral-post/ar-AA1IalYh)\n",
    "\n",
    "2. Chennai founder breaks down in front of traffic police after his unexpected question: 'And that's when the tears came'\n",
    " \n",
    " [Read more](https://www.msn.com/en-in/news/India/chennai-founder-breaks-down-in-front-of-traffic-police-after-his-unexpected-question-and-that-s-when-the-tears-came/ar-AA1Ia579)\n",
    "\n",
    "3. Explore India's coastal flavours at this Chennai food fest\n",
    " \n",
    " [Read more](https://www.msn.com/en-in/foodanddrink/other/explore-indias-coastal-flavours-at-this-chennai-food-fest/ar-AA1Iax3e)\n",
    "\n",
    "4. Chennai weather update: Expect patchy rain and a warm summer day\n",
    " \n",
    " [Read more](https://www.msn.com/en-in/news/india/chennai-weather-update-expect-patchy-rain-and-a-warm-summer-day/ar-AA1I9ECV)\n",
    "\n",
    "5. Chennai Power Outage Alert: Check Areas That Will Face Supply Disruption On July 8 For Maintenance Work\n",
    " \n",
    " [Read more](https://www.msn.com/en-in/autos/photos/chennai-power-outage-alert-check-areas-that-will-face-supply-disruption-on-july-8-for-maintenance-work/ar-AA1I9KIg)\n",
    "\n",
    "📊 Analyzing trends and opportunities...\n",
    "\n",
    "=== Trend Analysis Report ===\n",
    " \n",
    "\n",
    "🧪 Sample Topics for Testing\n",
    "1. AI in healthcare\n",
    "2. Sustainable energy startups\n",
    "3. Fintech innovation\n",
    "4. Remote work technology\n",
    "5. Climate tech investments\n",
    "\n",
    "==================================================\n",
    "📈 Powered by AI Market & Startup Trend Agent | Built with PraisonAI\n"
    ]
    }
    ],
    "source": [
    "# Main Application (Google Colab Version)\n",
    "import os\n",
    "import warnings\n",
    "warnings.filterwarnings(\"ignore\", category=ImportWarning)\n",
    "\n",
    "# Download all necessary NLTK data for newspaper3k\n",
    "import nltk\n",
    "nltk.download('tokenizers/punkt')\n",
    "nltk.download('averaged_perceptron_tagger', quiet=True)\n",
    "nltk.download('maxent_ne_chunker', quiet=True)\n",
    "nltk.download('words', quiet=True)\n",
    "\n",
    "# Initialize tools\n",
    "news_tool = NewsSearchTool()\n",
    "summary_tool = ArticleSummaryTool()\n",
    "trend_tool = TrendInsightTool()\n",
    "\n",
    "print(\"📈 AI Market & Startup Trend Agent\")\n",
    "print(\"Analyze current market and startup trends using real-time news and AI summarization!\")\n",
    "\n",
    "# User input\n",
    "topic = input(\"\\nEnter the area of interest for your Startup or Market Trend Analysis: \").strip()\n",
    "\n",
    "if topic:\n",
    " print(f\"\\n🔍 Searching for recent news on: {topic}\")\n",
    " articles = news_tool.search_news(topic)\n",
    " if articles:\n",
    " print(f\"✅ Found {len(articles)} articles. Summarizing...\")\n",
    " summaries = summary_tool.summarize_articles(articles)\n",
    " print(\"\\n📰 Article Summaries:\")\n",
    " for i, s in enumerate(summaries, 1):\n",
    " print(f\"{i}. {s['title']}\")\n",
    " print(f\" {s['summary']}\")\n",
    " print(f\" [Read more]({s['url']})\\n\")\n",
    "\n",
    " print(\"📊 Analyzing trends and opportunities...\")\n",
    " trend_report = trend_tool.extract_trends(summaries)\n",
    " print(\"\\n=== Trend Analysis Report ===\")\n",
    " print(trend_report[\"trend_report\"])\n",
    " else:\n",
    " print(\"❌ No news articles found for this topic.\")\n",
    "else:\n",
    " print(\"❌ No topic entered. Please provide a topic to analyze.\")\n",
    "\n",
    "# Sample topics for testing\n",
    "print(\"\\n🧪 Sample Topics for Testing\")\n",
    "sample_topics = [\n",
    " \"AI in healthcare\",\n",
    " \"Sustainable energy startups\",\n",
    " \"Fintech innovation\",\n",
    " \"Remote work technology\",\n",
    " \"Climate tech investments\"\n",
    "]\n",
    "for i, t in enumerate(sample_topics, 1):\n",
    " print(f\"{i}. {t}\")\n",
    "\n",
    "# Footer\n",
    "print(\"\\n\" + \"=\"*50)\n",
    "print(\"📈 Powered by AI Market & Startup Trend Agent | Built with PraisonAI\")"
    ]
    }
    ],
    "metadata": {
    "colab": {
    "provenance": []
    },
    "kernelspec": {
    "display_name": "Python 3",
    "name": "python3"
    },
    "language_info": {
    "name": "python"
    }
    },
    "nbformat": 4,
    "nbformat_minor": 0
    }
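The notebook's `TrendInsightTool` only concatenates summaries and defers the real analysis to "an LLM in production". As a rough illustration of what that aggregation step could do without an LLM, here is a keyword-frequency sketch; the function name, stopword list, and demo data are all illustrative, not part of this PR:

```python
# A minimal keyword-frequency stand-in for the LLM-based trend extraction
# the notebook defers to production. Operates on the same summaries shape
# produced by ArticleSummaryTool; everything here is illustrative.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "and", "or", "of", "in", "to", "for", "on", "is", "with"}

def extract_trend_keywords(summaries, top_n=5):
    """Rank the most frequent non-stopword terms across article summaries."""
    words = []
    for s in summaries:
        words += [w for w in re.findall(r"[a-z]+", s.get("summary", "").lower())
                  if w not in STOPWORDS and len(w) > 3]
    return [w for w, _ in Counter(words).most_common(top_n)]

demo = [
    {"summary": "Fintech startups raise record funding for payments"},
    {"summary": "Payments infrastructure startups attract new funding"},
]
print(extract_trend_keywords(demo, top_n=3))  # ['startups', 'funding', 'payments']
```

In the real agent this would run over the `summaries` list returned by `ArticleSummaryTool.summarize_articles`, feeding the ranked terms into the trend report.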

    ⚠️ Potential issue

    Remove duplicate file

    This file is identical to ai_market_startup_trend_agent.ipynb with only filename capitalization differences. Having duplicate files can cause confusion and maintenance issues.

    Consider removing this duplicate file and keeping only one version with consistent naming convention (preferably lowercase as per Python conventions).

    🤖 Prompt for AI Agents
    In examples/cookbooks/Ai_Market_Startup_Trend_Agent.ipynb lines 1 to 402, this
    file is a duplicate of ai_market_startup_trend_agent.ipynb differing only in
    filename capitalization. To avoid confusion and maintenance overhead, remove
    this duplicate file entirely and keep only the lowercase named version
    ai_market_startup_trend_agent.ipynb as per Python naming conventions.
    

    Comment on lines +345 to +383
    " def get_exercise_plan(self, goal: str, fitness_level: str,\n",
    " available_time: int, equipment: List[str]) -> Dict[str, Any]:\n",
    " \"\"\"Generate personalized exercise plan\"\"\"\n",
    " try:\n",
    " # Determine exercise focus based on goal\n",
    " if goal == \"Lose Weight\":\n",
    " focus = [\"cardio\", \"strength\"]\n",
    " cardio_ratio = 0.6\n",
    " elif goal == \"Gain Muscle\":\n",
    " focus = [\"strength\", \"cardio\"]\n",
    " cardio_ratio = 0.3\n",
    " elif goal == \"Endurance\":\n",
    " focus = [\"cardio\", \"strength\"]\n",
    " cardio_ratio = 0.7\n",
    " elif goal == \"Strength Training\":\n",
    " focus = [\"strength\", \"flexibility\"]\n",
    " cardio_ratio = 0.2\n",
    " else: # Stay Fit\n",
    " focus = [\"strength\", \"cardio\", \"flexibility\"]\n",
    " cardio_ratio = 0.4\n",
    "\n",
    " # Generate workout plan\n",
    " workout_plan = {\n",
    " \"warm_up\": self._get_warmup_routine(fitness_level),\n",
    " \"main_workout\": self._get_main_workout(focus, fitness_level, available_time, cardio_ratio),\n",
    " \"cool_down\": self._get_cooldown_routine(fitness_level),\n",
    " \"frequency\": self._get_workout_frequency(goal, fitness_level),\n",
    " \"progression\": self._get_progression_plan(fitness_level, goal)\n",
    " }\n",
    "\n",
    " return {\n",
    " \"success\": True,\n",
    " \"goal\": goal,\n",
    " \"fitness_level\": fitness_level,\n",
    " \"workout_plan\": workout_plan,\n",
    " \"tips\": self._get_exercise_tips(goal, fitness_level)\n",
    " }\n",
    " except Exception as e:\n",
    " return {\"error\": f\"Error generating exercise plan: {str(e)}\"}\n",

    🛠️ Refactor suggestion

    Utilize the equipment parameter for exercise selection

    The get_exercise_plan method accepts an equipment parameter but doesn't use it to filter exercises. Consider implementing equipment-based filtering for more personalized recommendations.

     def get_exercise_plan(self, goal: str, fitness_level: str,
                          available_time: int, equipment: List[str]) -> Dict[str, Any]:
         """Generate personalized exercise plan"""
         try:
    +        # Filter exercises based on available equipment
    +        equipment_exercises = {
    +            "bodyweight": ["Push-ups", "Squats", "Planks", "Lunges", "Burpees"],
    +            "dumbbells": ["Dumbbell rows", "Dumbbell press", "Bicep curls"],
    +            "barbell": ["Barbell squats", "Deadlifts", "Bench press"],
    +            "none": ["Walking", "Running", "Stretching"]
    +        }
    +        
             # Determine exercise focus based on goal

    Then use this filtering when selecting exercises in the _get_main_workout method.

    Committable suggestion skipped: line range outside the PR's diff.

    🤖 Prompt for AI Agents
    In examples/cookbooks/ai_health_fitness_agent.ipynb around lines 345 to 383, the
    get_exercise_plan method receives an equipment parameter but does not use it to
    filter exercises. To fix this, modify the method to pass the equipment list to
    the _get_main_workout call and update the _get_main_workout method to filter
    exercises based on the available equipment, ensuring the workout plan is
    personalized according to the user's equipment.
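The equipment-based filtering the review asks for can be sketched concretely. The equipment-to-exercise mapping and helper name below are illustrative, not the notebook's actual API:

```python
# Sketch of equipment-based exercise selection, as suggested in the review.
# The mapping and function name are hypothetical placeholders.
EQUIPMENT_EXERCISES = {
    "bodyweight": ["Push-ups", "Squats", "Planks", "Lunges", "Burpees"],
    "dumbbells": ["Dumbbell rows", "Dumbbell press", "Bicep curls"],
    "barbell": ["Barbell squats", "Deadlifts", "Bench press"],
    "none": ["Walking", "Running", "Stretching"],
}

def select_exercises(equipment, max_exercises=6):
    """Collect exercises compatible with the user's available equipment."""
    available = equipment or ["none"]  # fall back to no-equipment options
    pool = []
    for item in available:
        pool += EQUIPMENT_EXERCISES.get(item.lower(), [])
    return pool[:max_exercises]

print(select_exercises(["bodyweight", "dumbbells"]))
```

`_get_main_workout` could then draw from this filtered pool instead of a fixed exercise list.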
    

    Comment on lines +82 to +94
    "import os\n",
    "\n",
    "# Set your Gemini API key\n",
    "gemini_key = \"Enter your api key here\" # Get from https://aistudio.google.com/apikey\n",
    "\n",
    "# Set environment variable\n",
    "os.environ[\"GOOGLE_API_KEY\"] = gemini_key\n",
    "\n",
    "# Model selection\n",
    "model_choice = \"gemini-2.0-flash-exp\" # Options: \"gemini-2.0-flash-exp\", \"gemini-1.5-pro\", \"gemini-1.5-flash\"\n",
    "\n",
    "print(\"✅ API key configured!\")\n",
    "print(f\"✅ Using model: {model_choice}\")"

    ⚠️ Potential issue

    Secure the API key configuration

    Similar to the data analysis notebook, avoid hardcoding API keys. Use environment variables for better security.

     import os
     
     # Set your Gemini API key
    -gemini_key = "Enter your api key here"  # Get from https://aistudio.google.com/apikey
    +gemini_key = os.getenv("GOOGLE_API_KEY", "")
    +
    +if not gemini_key:
    +    print("⚠️ Please set the GOOGLE_API_KEY environment variable")
    +    print("Get your API key from: https://aistudio.google.com/apikey")
    +    raise ValueError("Google API key not found")
     
     # Set environment variable
     os.environ["GOOGLE_API_KEY"] = gemini_key
    🤖 Prompt for AI Agents
    In examples/cookbooks/ai_health_fitness_agent.ipynb around lines 82 to 94, the
    API key is hardcoded as a string which is insecure. Modify the code to read the
    Gemini API key from an environment variable instead of hardcoding it. Remove the
    direct assignment of the key string and use os.environ.get to retrieve the key,
    ensuring the key is set externally before running the notebook.
    

    Comment on lines +237 to +250
    " def calculate_calories(self, age: int, weight_kg: float, height_cm: float,\n",
    " sex: str, activity_level: str, goal: str) -> Dict[str, Any]:\n",
    " \"\"\"Calculate daily calorie needs based on goals\"\"\"\n",
    " try:\n",
    " # Calculate BMR using Mifflin-St Jeor Equation\n",
    " if sex.lower() == \"male\":\n",
    " bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age + 5\n",
    " else:\n",
    " bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age - 161\n",
    "\n",
    " # Calculate TDEE (Total Daily Energy Expenditure)\n",
    " activity_multiplier = self.activity_multipliers.get(activity_level, 1.2)\n",
    " tdee = bmr * activity_multiplier\n",
    "\n",

    🛠️ Refactor suggestion

    Add input validation for sex parameter

    The calculate_calories method should validate the sex parameter to handle unexpected inputs gracefully.

     def calculate_calories(self, age: int, weight_kg: float, height_cm: float,
                           sex: str, activity_level: str, goal: str) -> Dict[str, Any]:
         """Calculate daily calorie needs based on goals"""
         try:
    +        # Validate sex parameter
    +        if sex.lower() not in ['male', 'female']:
    +            # Default to female formula for 'other' or unrecognized inputs
    +            sex = 'female'
    +        
             # Calculate BMR using Mifflin-St Jeor Equation
             if sex.lower() == "male":
                 bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age + 5
             else:
                 bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age - 161
    🤖 Prompt for AI Agents
    In examples/cookbooks/ai_health_fitness_agent.ipynb around lines 237 to 250, the
    calculate_calories method lacks validation for the sex parameter, which may
    cause errors with unexpected inputs. Add input validation to check if sex is
    either "male" or "female" (case-insensitive). If the input is invalid, raise a
    ValueError with a clear message. This ensures the method handles unexpected sex
    values gracefully before proceeding with calculations.
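For reference, the Mifflin-St Jeor numbers are easy to check by hand. The sketch below combines the behaviors discussed in this thread (rejecting non-positive inputs; defaulting unrecognized sex values to the female constant, per the diff); the function name is illustrative:

```python
# Worked Mifflin-St Jeor example with the validation the review asks for.
# Defaulting unrecognized sex inputs to the female formula follows the
# reviewer's suggested policy, not the notebook's current behavior.
def mifflin_st_jeor_bmr(age, weight_kg, height_cm, sex):
    """Basal metabolic rate in kcal/day; raises on non-positive inputs."""
    if min(age, weight_kg, height_cm) <= 0:
        raise ValueError("age, weight and height must be positive")
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age
    # "male" adds 5; anything else uses the female constant (-161)
    return base + 5 if sex.lower() == "male" else base - 161

print(mifflin_st_jeor_bmr(30, 70.0, 175.0, "male"))    # 700 + 1093.75 - 150 + 5 = 1648.75
print(mifflin_st_jeor_bmr(30, 70.0, 175.0, "female"))  # 700 + 1093.75 - 150 - 161 = 1482.75
```

Multiplying the BMR by the activity multiplier (1.2 for sedentary, as in the notebook's `activity_multipliers` fallback) then yields the TDEE.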
    

    Comment on lines +67 to +68
    "import os\n",
    "os.environ[\"OPENAI_API_KEY\"] = \"sk-...\" # <-- Replace with your actual OpenAI API key"

    ⚠️ Potential issue

    Replace placeholder API key with secure configuration.

    The hardcoded API key placeholder poses a security risk and won't work for users.

    -os.environ["OPENAI_API_KEY"] = "sk-..."  # <-- Replace with your actual OpenAI API key
    +# Set your OpenAI API key via environment variable or input
    +import getpass
    +if "OPENAI_API_KEY" not in os.environ:
    +    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter your OpenAI API key: ")
    🤖 Prompt for AI Agents
    In examples/cookbooks/AI_Enrollment_Counselor.ipynb around lines 67 to 68, the
    code hardcodes a placeholder OpenAI API key, which is insecure and
    non-functional. Remove the hardcoded key and instead load the API key securely
    from an environment variable or a secure configuration file. Update the code to
    read the API key dynamically, for example using os.environ.get, and provide
    instructions for users to set their API key externally.
    

    "id": "rL-UiB5NOspT"
    },
    "source": [
    "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Dhivya-Bharathy/PraisonAI/blob/main/examples/cookbooks/AI_Enrollment_Counselor.ipynb)\n"

    💡 Verification agent

    🧩 Analysis chain

    Verify the GitHub repository reference in Colab badge.

    The Colab badge references Dhivya-Bharathy/PraisonAI which may not be the main repository.

    Please verify if this should reference the main PraisonAI repository instead:


    🏁 Script executed:

    #!/bin/bash
    # Check if the repository reference in the Colab badge is correct
    echo "Current reference: Dhivya-Bharathy/PraisonAI"
    echo "Expected reference might be: MervinPraison/PraisonAI"
    echo ""
    echo "Checking repository structure..."
    rg -n "github.com.*PraisonAI" --type md

    Length of output: 901


    Update Colab badge to point at the main repository

    The Colab badge in examples/cookbooks/AI_Enrollment_Counselor.ipynb is still referencing Dhivya-Bharathy/PraisonAI; it should use MervinPraison/PraisonAI to match the project’s primary repo.

    • File: examples/cookbooks/AI_Enrollment_Counselor.ipynb
    Line: ~27

    Suggested change:

    - "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Dhivya-Bharathy/PraisonAI/blob/main/examples/cookbooks/AI_Enrollment_Counselor.ipynb)\n"
    + "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/MervinPraison/PraisonAI/blob/main/examples/cookbooks/AI_Enrollment_Counselor.ipynb)\n"
    🤖 Prompt for AI Agents
    In examples/cookbooks/AI_Enrollment_Counselor.ipynb at line 27, update the Colab
    badge URL to replace the GitHub user path from 'Dhivya-Bharathy/PraisonAI' to
    'MervinPraison/PraisonAI' so it correctly points to the main project repository.
    

    @MervinPraison MervinPraison merged commit f51e964 into MervinPraison:main Jul 8, 2025
    38 of 39 checks passed