Add AI Health & Fitness Agent Notebook #751
Conversation
Walkthrough

Five new Jupyter notebooks are introduced, each demonstrating a different AI-powered agent for a specialized task: university admissions counseling, data analysis, health and fitness planning, meme creation, and local document question answering. Each notebook defines custom tool classes, sets up agent prompts and configurations, and implements an interactive workflow tailored to its domain.
Sequence Diagram(s)

sequenceDiagram
participant User
participant Notebook
participant AI_Agent
participant Tool1
participant Tool2
User->>Notebook: Provide input (e.g., upload file, ask question)
Notebook->>AI_Agent: Configure agent with prompt/tools
AI_Agent->>Tool1: Invoke tool (e.g., data analysis, document processing)
Tool1-->>AI_Agent: Return results
AI_Agent->>Tool2: (Optional) Invoke secondary tool
Tool2-->>AI_Agent: Return results
AI_Agent-->>Notebook: Generate response/output
Notebook-->>User: Display result/visualization/answer
Summary of Changes
Hello @Dhivya-Bharathy, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly enhances the examples/cookbooks collection by adding a suite of five new AI agent notebooks. These additions expand the framework's demonstrated capabilities across various domains, from automated university admissions and data analysis to personalized health guidance, creative content generation, and local document question-answering. The changes emphasize the platform's flexibility in integrating custom tools and supporting multiple LLM backends.
Highlights
- New AI Agent Notebooks: This pull request introduces five new Jupyter notebooks, each demonstrating a distinct AI agent with specialized functionalities within the examples/cookbooks directory.
- Diverse Agent Capabilities: New agents include an AI Enrollment Counselor for university admissions, an AI Data Analysis Agent for dataset insights, an AI Health & Fitness Agent for personalized recommendations, an AI Meme Creator Agent with browser automation, and a Local RAG Document QA Agent for local knowledge retrieval.
- Custom Tool Integration: Each agent showcases the integration of various custom tools, such as BMI/calorie calculators, data visualization/preprocessing, meme template search/caption generation, and document processing/vector database management, highlighting the extensibility of the PraisonAI framework.
- Multi-Model and Local LLM Support: The new examples demonstrate compatibility with different Large Language Models, including OpenAI, Google Gemini, and local models via Ollama, catering to diverse deployment and privacy needs.
Code Review
This pull request introduces a collection of impressive AI agent notebooks that showcase various capabilities of the PraisonAI framework. The examples are well-structured and cover interesting use cases. My main feedback focuses on improving security by removing hardcoded API key placeholders and using a secure method such as Colab's secrets manager instead. I've also suggested some refactoring opportunities to improve code maintainability, and noted a few minor correctness and resource management issues. Addressing these points will make the notebooks even more robust and user-friendly.
| "outputs": [], | ||
| "source": [ | ||
| "import os\n", | ||
| "os.environ[\"OPENAI_API_KEY\"] = \"sk-...\" # <-- Replace with your actual OpenAI API key" |
Storing API keys directly in code, even as placeholders, poses a security risk. Use Colab's secrets manager for secure handling.¹

from google.colab import userdata  # Colab's secrets manager

os.environ["OPENAI_API_KEY"] = userdata.get('OPENAI_API_KEY')
Style Guide References
Footnotes
1. Always store API keys securely, and never commit them to version control. ↩
import os
openai_key = "sk-.."
| "import os\n", | ||
| "\n", | ||
| "# Set your Gemini API key\n", | ||
| "gemini_key = \"Enter your api key here\" # Get from https://aistudio.google.com/apikey\n", |
Storing API keys directly in code, even as placeholders, is a security risk. Use Colab's secrets manager for secure handling.¹

from google.colab import userdata  # Colab's secrets manager

gemini_key = userdata.get("GOOGLE_API_KEY")  # Get from https://aistudio.google.com/apikey
Style Guide References
Footnotes
1. Always store API keys securely, and never commit them to version control. ↩
| "openai_key = \"Enter you api key here\"\n", | ||
| "anthropic_key = \"Enter you api key here\" # Get from https://console.anthropic.com\n", | ||
| "deepseek_key = \"Enter you api key here\" # Get from https://platform.deepseek.com\n", |
| " if chart_type == 'bar':\n", | ||
| " fig = px.bar(df, x=x_column, y=y_column, title=title, color_discrete_sequence=['#1f77b4'])\n", | ||
| " elif chart_type == 'line':\n", | ||
| " fig = px.line(df, x=x_column, y=y_column, title=title, color_discrete_sequence=['#2ca02c'])\n", | ||
| " elif chart_type == 'scatter':\n", | ||
| " fig = px.scatter(df, x=x_column, y=y_column, title=title, color_discrete_sequence=['#ff7f0e'])\n", | ||
| " elif chart_type == 'histogram':\n", | ||
| " fig = px.histogram(df, x=x_column, title=title, color_discrete_sequence=['#d62728'])\n", | ||
| " elif chart_type == 'box':\n", | ||
| " fig = px.box(df, x=x_column, y=y_column, title=title, color_discrete_sequence=['#9467bd'])\n", | ||
| " elif chart_type == 'pie':\n", | ||
| " fig = px.pie(df, values=y_column, names=x_column, title=title)\n", | ||
| " elif chart_type == 'heatmap':\n", | ||
| " corr_matrix = df.corr()\n", | ||
| " fig = px.imshow(corr_matrix, title=title, color_continuous_scale='RdBu')\n", | ||
| " elif chart_type == 'area':\n", | ||
| " fig = px.area(df, x=x_column, y=y_column, title=title, color_discrete_sequence=['#8c564b'])\n", | ||
| " else:\n", | ||
| " return \"Unsupported chart type\"\n", |
| " print(\"\\n📊 Example Custom Visualization:\")\n", | ||
| " chart_type = 'bar'\n", | ||
| " x_column = df.columns[0]\n", | ||
| " y_column = df.columns[1] if df.columns[1] in df.select_dtypes(include=[np.number]).columns else df.columns[0]\n", |
| " if category == \"Underweight\":\n", | ||
| " recommendations = [\n", | ||
| " \"Increase caloric intake with nutrient-dense foods\",\n", | ||
| " \"Include protein-rich foods in every meal\",\n", | ||
| " \"Consider strength training to build muscle mass\",\n", | ||
| " \"Eat frequent, smaller meals throughout the day\"\n", | ||
| " ]\n", | ||
| " elif category == \"Normal weight\":\n", | ||
| " recommendations = [\n", | ||
| " \"Maintain current healthy eating habits\",\n", | ||
| " \"Continue regular physical activity\",\n", | ||
| " \"Focus on balanced nutrition\",\n", | ||
| " \"Monitor weight regularly\"\n", | ||
| " ]\n", | ||
| " elif category == \"Overweight\":\n", | ||
| " recommendations = [\n", | ||
| " \"Create a moderate caloric deficit\",\n", | ||
| " \"Increase physical activity\",\n", | ||
| " \"Focus on whole foods and vegetables\",\n", | ||
| " \"Consider working with a nutritionist\"\n", | ||
| " ]\n", | ||
| " elif category == \"Obese\":\n", | ||
| " recommendations = [\n", | ||
| " \"Consult with healthcare professionals\",\n", | ||
| " \"Start with low-impact exercises\",\n", | ||
| " \"Focus on sustainable lifestyle changes\",\n", | ||
| " \"Consider medical weight loss programs\"\n", | ||
| " ]\n", |
| " if sex.lower() == \"male\":\n", | ||
| " bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age + 5\n", | ||
| " else:\n", | ||
| " bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age - 161\n", |
| "openai_key = \"sk-..\"\n", | ||
| "\n", | ||
| "os.environ[\"OPENAI_API_KEY\"] = openai_key\n", |
| " \"\"\"Add documents to the vector database\"\"\"\n", | ||
| " try:\n", | ||
| " if ids is None:\n", | ||
| " ids = [f\"doc_{i}\" for i in range(len(documents))]\n", |
The default ID generation can lead to non-unique IDs if add_documents is called multiple times. Use uuid.uuid4() for more robust unique ID generation.¹

import uuid  # needed at the top of the cell

ids = [str(uuid.uuid4()) for _ in range(len(documents))]
Style Guide References
Footnotes
1. Ensure unique IDs for database entries to prevent overwriting data. ↩
Actionable comments posted: 17
♻️ Duplicate comments (1)
examples/cookbooks/ai_data_analysis_agent.ipynb (1)
31-31: Fix Colab badge URL to point to the main repository

Same issue as the other notebook - the URL points to a personal fork instead of the main repository.
-[](https://colab.research.google.com/github/Dhivya-Bharathy/PraisonAI/blob/main/examples/cookbooks/ai_data_analysis_agent.ipynb)
+[](https://colab.research.google.com/github/MervinPraison/PraisonAI/blob/main/examples/cookbooks/ai_data_analysis_agent.ipynb)
🧹 Nitpick comments (4)
examples/cookbooks/local_rag_document_qa_agent.ipynb (2)
49-49: Remove unused qdrant-client dependency

The qdrant-client package is installed but never used in the notebook. Only ChromaDB is used for vector storage.

-!pip install praisonai streamlit qdrant-client ollama pypdf PyPDF2 chromadb sentence-transformers
+!pip install praisonai streamlit ollama pypdf PyPDF2 chromadb sentence-transformers
208-213: Confirm ChromaDB API compatibility and make storage path configurable with cleanup

Please ensure that your installed version of chromadb still exposes PersistentClient(path=…) and get_or_create_collection(…) as used below, and implement a cleanup mechanism if needed.

• File: examples/cookbooks/local_rag_document_qa_agent.ipynb
• Lines: around the VectorDatabaseTool.__init__ definition

Suggested revision:

import os

class VectorDatabaseTool:
    def __init__(
        self,
        collection_name: str = "document_qa",
        db_path: str = "./chroma_db",
    ):
        self.collection_name = collection_name
        self.db_path = db_path
        # Ensure the directory exists
        os.makedirs(self.db_path, exist_ok=True)
        # Initialize ChromaDB client (verify this matches your version)
        self.client = chromadb.PersistentClient(path=self.db_path)
        self.collection = self.client.get_or_create_collection(name=self.collection_name)

    def cleanup(self):
        # Optional: implement client shutdown or directory removal
        self.client.shutdown()
        # shutil.rmtree(self.db_path, ignore_errors=True)

examples/cookbooks/ai_data_analysis_agent.ipynb (2)
51-51: Remove unused duckdb dependency

The duckdb package is installed but never used in the notebook.

-!pip install praisonai streamlit openai duckdb pandas numpy plotly matplotlib seaborn
+!pip install praisonai streamlit openai pandas numpy plotly matplotlib seaborn
861-869: Properly handle BytesIO resource

The BytesIO object should be properly closed after use.

# Create a file-like object
with io.BytesIO(file_content) as file_obj:
    file_obj.name = file_name

    # Preprocess and save the uploaded file
    temp_path, columns, df, error = preprocess_tool.preprocess_file(file_obj)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- examples/cookbooks/AI_Enrollment_Counselor.ipynb (1 hunks)
- examples/cookbooks/ai_data_analysis_agent.ipynb (1 hunks)
- examples/cookbooks/ai_health_fitness_agent.ipynb (1 hunks)
- examples/cookbooks/ai_meme_creator_agent.ipynb (1 hunks)
- examples/cookbooks/local_rag_document_qa_agent.ipynb (1 hunks)
🧰 Additional context used
🧠 Learnings (3)
examples/cookbooks/AI_Enrollment_Counselor.ipynb (1)
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/agents/autoagents.ts : The 'AutoAgents' class in 'src/agents/autoagents.ts' should provide high-level convenience for automatically generating agent/task configuration from user instructions, using 'aisdk' to parse config.
examples/cookbooks/ai_meme_creator_agent.ipynb (1)
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-ts/.cursorrules:0-0
Timestamp: 2025-06-30T10:05:51.843Z
Learning: Applies to src/praisonai-ts/src/llm/llm.ts : The 'LLM' class in 'llm.ts' should wrap 'aisdk.generateText' calls for generating text responses.
examples/cookbooks/local_rag_document_qa_agent.ipynb (1)
Learnt from: CR
PR: MervinPraison/PraisonAI#0
File: src/praisonai-agents/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:06:17.673Z
Learning: Applies to src/praisonai-agents/praisonaiagents/{memory,knowledge}/**/*.py : Place memory-related implementations in `praisonaiagents/memory/` and knowledge/document processing in `praisonaiagents/knowledge/`.
🔇 Additional comments (3)
examples/cookbooks/ai_health_fitness_agent.ipynb (2)
117-224: Well-implemented BMI calculation with proper health categorization

The BMI calculator correctly implements the standard formula and provides appropriate health recommendations for each category. The health risk assessments are medically sound.
947-952: Excellent inclusion of safety disclaimers

The safety considerations appropriately advise users to consult healthcare professionals and include important warnings about starting new fitness programs.
examples/cookbooks/AI_Enrollment_Counselor.ipynb (1)
400-427: Clean and effective agent implementation

The enrollment counselor agent is well-designed with clear role definition and a helpful wrapper function that handles both document checking and general queries. The example usage effectively demonstrates the agent's capabilities.
| " with tempfile.NamedTemporaryFile(delete=False, suffix=os.path.splitext(file_name)[1]) as temp_file:\n", | ||
| " temp_file.write(file_content)\n", | ||
| " temp_path = temp_file.name\n", | ||
| "\n", | ||
| " # Process document\n", | ||
| " doc_result = doc_tool.process_document(temp_path)\n", | ||
| "\n", | ||
| " if \"error\" not in doc_result:\n", | ||
| " processed_docs.append(doc_result)\n", | ||
| " print(f\"✅ Successfully processed {file_name}\")\n", | ||
| " print(f\" - Format: {doc_result.get('format', 'unknown')}\")\n", | ||
| " print(f\" - Text length: {len(doc_result['text'])} characters\")\n", | ||
| "\n", | ||
| " if 'pages' in doc_result:\n", | ||
| " print(f\" - Pages: {doc_result['pages']}\")\n", | ||
| " else:\n", | ||
| " print(f\"❌ Error processing {file_name}: {doc_result['error']}\")\n", | ||
| "\n", | ||
| " # Clean up temp file\n", | ||
| " os.unlink(temp_path)\n", |
Ensure proper cleanup of temporary files
Temporary files are created but not cleaned up in all code paths, which could lead to disk space issues.
Use context manager or try-finally for cleanup:
# Process each uploaded file
processed_docs = []
temp_files = []  # Track temp files for cleanup

try:
    for file_name, file_content in uploaded.items():
        print(f"\n📄 Processing: {file_name}")

        # Save file temporarily
        with tempfile.NamedTemporaryFile(delete=False, suffix=os.path.splitext(file_name)[1]) as temp_file:
            temp_file.write(file_content)
            temp_path = temp_file.name
            temp_files.append(temp_path)

        # Process document...
finally:
    # Clean up all temp files
    for temp_path in temp_files:
        try:
            os.unlink(temp_path)
        except:
            pass

🤖 Prompt for AI Agents
pass🤖 Prompt for AI Agents
In examples/cookbooks/local_rag_document_qa_agent.ipynb around lines 757 to 776,
temporary files are deleted only after processing each file, which risks leaving
files undeleted if an error occurs earlier. Refactor the code to track all
temporary file paths in a list and use a try-finally block around the entire
file processing loop to ensure all temporary files are deleted in the finally
clause, handling any exceptions during deletion gracefully.
| " # Here you would integrate with local LLM for answer generation\n", | ||
| " print(f\"\\n💡 AI Answer (using local LLM):\")\n", | ||
| " print(\"Based on the document content, here's what I found...\")\n", | ||
| " print(\"(This would be generated by the local LLM model)\")\n", | ||
| "\n", |
Missing LLM integration for answer generation
The notebook claims to be a RAG agent but only implements retrieval (vector search) without generation. The placeholder comment indicates missing functionality.
The current implementation only retrieves relevant chunks but doesn't generate answers using an LLM. To complete the RAG pipeline, you need to integrate with Ollama or another LLM service.
Would you like me to provide an implementation that integrates with Ollama for local LLM inference?
🤖 Prompt for AI Agents
In examples/cookbooks/local_rag_document_qa_agent.ipynb around lines 842 to 846,
the code only prints placeholder text instead of generating answers using a
local LLM. To fix this, replace the placeholder prints with actual integration
code that sends the retrieved document chunks to a local LLM service like Ollama
for answer generation, then print the generated response. This completes the
retrieval-augmented generation pipeline by combining retrieval with LLM-based
answer generation.
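For reference, here is a minimal sketch of what that missing generation step could look like with the ollama Python client. The model name, prompt format, and the retrieved_chunks variable are illustrative assumptions, not code from this PR:

```python
# Sketch only: assumes a local Ollama server is running, the model has been pulled
# (e.g. `ollama pull llama3.2`), and `retrieved_chunks` holds the text chunks
# returned by the vector search step above.
import ollama

def generate_answer(question: str, retrieved_chunks: list, model: str = "llama3.2") -> str:
    # Ground the model in the retrieved context
    context = "\n\n".join(retrieved_chunks)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    response = ollama.chat(model=model, messages=[{"role": "user", "content": prompt}])
    return response["message"]["content"]
```

Printing the returned string in place of the placeholder messages would close the retrieval-augmented generation loop.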
| " except Exception as e:\n", | ||
| " return {\"error\": f\"Error processing document: {str(e)}\"}\n", | ||
| "\n", | ||
| " def _process_pdf(self, file_path: str) -> Dict[str, Any]:\n", | ||
| " \"\"\"Process PDF files\"\"\"\n", | ||
| " try:\n", | ||
| " with open(file_path, 'rb') as file:\n", | ||
| " pdf_reader = PyPDF2.PdfReader(file)\n", | ||
| " text = \"\"\n", | ||
| " for page in pdf_reader.pages:\n", | ||
| " text += page.extract_text() + \"\\n\"\n", | ||
| "\n", | ||
| " return {\n", | ||
| " \"text\": text,\n", | ||
| " \"pages\": len(pdf_reader.pages),\n", | ||
| " \"format\": \"pdf\",\n", | ||
| " \"file_path\": file_path\n", | ||
| " }\n", | ||
| " except Exception as e:\n", | ||
| " return {\"error\": f\"PDF processing error: {str(e)}\"}\n", | ||
| "\n", | ||
| " def _process_txt(self, file_path: str) -> Dict[str, Any]:\n", | ||
| " \"\"\"Process text files\"\"\"\n", | ||
| " try:\n", | ||
| " with open(file_path, 'r', encoding='utf-8') as file:\n", | ||
| " text = file.read()\n", | ||
| "\n", | ||
| " return {\n", | ||
| " \"text\": text,\n", | ||
| " \"format\": \"txt\",\n", | ||
| " \"file_path\": file_path\n", | ||
| " }\n", | ||
| " except Exception as e:\n", | ||
| " return {\"error\": f\"Text processing error: {str(e)}\"}\n", | ||
| "\n", | ||
| " def _process_md(self, file_path: str) -> Dict[str, Any]:\n", | ||
| " \"\"\"Process markdown files\"\"\"\n", | ||
| " try:\n", | ||
| " with open(file_path, 'r', encoding='utf-8') as file:\n", | ||
| " text = file.read()\n", | ||
| "\n", | ||
| " return {\n", | ||
| " \"text\": text,\n", | ||
| " \"format\": \"md\",\n", | ||
| " \"file_path\": file_path\n", | ||
| " }\n", | ||
| " except Exception as e:\n", | ||
| " return {\"error\": f\"Markdown processing error: {str(e)}\"}\n", | ||
| "\n", | ||
| " def _process_csv(self, file_path: str) -> Dict[str, Any]:\n", | ||
| " \"\"\"Process CSV files\"\"\"\n", | ||
| " try:\n", | ||
| " df = pd.read_csv(file_path)\n", | ||
| " text = df.to_string(index=False)\n", | ||
| "\n", | ||
| " return {\n", | ||
| " \"text\": text,\n", | ||
| " \"format\": \"csv\",\n", | ||
| " \"rows\": len(df),\n", | ||
| " \"columns\": len(df.columns),\n", | ||
| " \"file_path\": file_path\n", | ||
| " }\n", | ||
| " except Exception as e:\n", | ||
| " return {\"error\": f\"CSV processing error: {str(e)}\"}\n", |
🛠️ Refactor suggestion
Refactor duplicated error handling and improve exception specificity
The error handling code is duplicated across _process_txt, _process_md, and _process_csv methods. Also, catching all exceptions is too broad.
Refactor to reduce duplication:
def _process_text_file(self, file_path: str, format_name: str) -> Dict[str, Any]:
    """Generic text file processor"""
    try:
        with open(file_path, 'r', encoding='utf-8') as file:
            text = file.read()

        return {
            "text": text,
            "format": format_name,
            "file_path": file_path
        }
    except FileNotFoundError:
        return {"error": f"{format_name.upper()} file not found: {file_path}"}
    except UnicodeDecodeError:
        return {"error": f"{format_name.upper()} file encoding error"}
    except Exception as e:
        return {"error": f"{format_name.upper()} processing error: {str(e)}"}

def _process_txt(self, file_path: str) -> Dict[str, Any]:
    """Process text files"""
    return self._process_text_file(file_path, "txt")

def _process_md(self, file_path: str) -> Dict[str, Any]:
    """Process markdown files"""
    return self._process_text_file(file_path, "md")

🤖 Prompt for AI Agents
In examples/cookbooks/local_rag_document_qa_agent.ipynb around lines 139 to 202,
the error handling in _process_txt, _process_md, and _process_csv is duplicated
and too broad by catching all exceptions. Refactor by creating a generic helper
method _process_text_file that handles file reading and specific exceptions like
FileNotFoundError and UnicodeDecodeError, returning clear error messages. Then
update _process_txt and _process_md to call this helper with the appropriate
format name. Keep _process_csv separate due to its different processing logic.
| "id": "K3HdvvEid5wP" | ||
| }, | ||
| "source": [ | ||
| "[](https://colab.research.google.com/github/Dhivya-Bharathy/PraisonAI/blob/main/examples/cookbooks/local_rag_document_qa_agent.ipynb)\n" |
Fix Colab badge URL to point to the main repository
The Colab badge URL currently points to a personal fork (Dhivya-Bharathy/PraisonAI) instead of the main repository (MervinPraison/PraisonAI).
-[](https://colab.research.google.com/github/Dhivya-Bharathy/PraisonAI/blob/main/examples/cookbooks/local_rag_document_qa_agent.ipynb)
+[](https://colab.research.google.com/github/MervinPraison/PraisonAI/blob/main/examples/cookbooks/local_rag_document_qa_agent.ipynb)
🤖 Prompt for AI Agents
In examples/cookbooks/local_rag_document_qa_agent.ipynb at line 29, update the
Colab badge URL to replace the personal fork path 'Dhivya-Bharathy/PraisonAI'
with the main repository path 'MervinPraison/PraisonAI' so the badge correctly
points to the main repo.
| "\n", | ||
| "temperature: 0.3\n", | ||
| "max_tokens: 4000\n", | ||
| "model: \"local-llama3.2\"\n", |
🛠️ Refactor suggestion
Use standard model naming convention
The model name "local-llama3.2" is non-standard and might cause confusion. For Ollama, use the actual model names.
-model: "local-llama3.2"
+model: "llama3.2" # or "llama2", "mistral", etc. - use actual Ollama model namesCommittable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In examples/cookbooks/local_rag_document_qa_agent.ipynb at line 389, the model
name "local-llama3.2" is non-standard and may cause confusion. Replace this with
the correct and standard Ollama model name to ensure compatibility and clarity.
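As a side note (this reflects the general Ollama workflow, not anything configured by this PR), whichever name goes into the config has to match a model already pulled into the local Ollama installation, which can be checked from the notebook:

```python
# Notebook-style shell commands; assumes the Ollama CLI is installed locally.
# List locally available models, and pull one if it is missing.
!ollama list
!ollama pull llama3.2
```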
| "openai_key = \"Enter you api key here\"\n", | ||
| "anthropic_key = \"Enter you api key here\" # Get from https://console.anthropic.com\n", | ||
| "deepseek_key = \"Enter you api key here\" # Get from https://platform.deepseek.com\n", |
Fix typo in API key placeholder text
There's a grammatical error in the placeholder text.
-openai_key = "Enter you api key here"
-anthropic_key = "Enter you api key here" # Get from https://console.anthropic.com
-deepseek_key = "Enter you api key here" # Get from https://platform.deepseek.com
+openai_key = "Enter your api key here"
+anthropic_key = "Enter your api key here" # Get from https://console.anthropic.com
+deepseek_key = "Enter your api key here" # Get from https://platform.deepseek.com

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In examples/cookbooks/ai_meme_creator_agent.ipynb around lines 85 to 87, fix the
typo in the API key placeholder text by changing "Enter you api key here" to
"Enter your api key here" for all three key variables to correct the grammar.
| "id": "UBmUqUofoVrf" | ||
| }, | ||
| "source": [ | ||
| "[](https://colab.research.google.com/github/Dhivya-Bharathy/PraisonAI/blob/main/examples/cookbooks/ai_health_fitness_agent.ipynb)\n" |
🛠️ Refactor suggestion
Update Colab badge URL to point to the main repository
The Colab badge currently points to a personal fork instead of the main repository.
-[](https://colab.research.google.com/github/Dhivya-Bharathy/PraisonAI/blob/main/examples/cookbooks/ai_health_fitness_agent.ipynb)
+[](https://colab.research.google.com/github/MervinPraison/PraisonAI/blob/main/examples/cookbooks/ai_health_fitness_agent.ipynb)
🤖 Prompt for AI Agents
In examples/cookbooks/ai_health_fitness_agent.ipynb at line 29, update the Colab
badge URL so that it points to the main repository URL instead of the personal
fork. Replace the current GitHub link in the badge markdown with the correct
path to the main repository's notebook file.
!pip install praisonai streamlit google-generativeai pandas numpy matplotlib seaborn
🛠️ Refactor suggestion
Pin package versions for stability
Package versions should be pinned to ensure consistent behavior across different environments.
-!pip install praisonai streamlit google-generativeai pandas numpy matplotlib seaborn
+!pip install praisonai==1.0.0 streamlit==1.28.0 google-generativeai==0.3.0 pandas==2.0.0 numpy==1.24.0 matplotlib==3.7.0 seaborn==0.12.0

Note: Update with appropriate compatible versions.
🤖 Prompt for AI Agents
In examples/cookbooks/ai_health_fitness_agent.ipynb at line 49, the pip install
command installs packages without specifying versions, which can lead to
inconsistent behavior. Modify the command to pin each package to a specific,
compatible version by appending '==version_number' for each package to ensure
stability and reproducibility across environments.
| "outputs": [], | ||
| "source": [ | ||
| "import os\n", | ||
| "os.environ[\"OPENAI_API_KEY\"] = \"sk-...\" # <-- Replace with your actual OpenAI API key" |
Use a safer API key placeholder to prevent accidental exposure
The placeholder "sk-..." resembles OpenAI's actual API key format, which could lead to users accidentally committing real keys.
-os.environ["OPENAI_API_KEY"] = "sk-..." # <-- Replace with your actual OpenAI API key
+os.environ["OPENAI_API_KEY"] = "your-openai-api-key-here" # <-- Replace with your actual OpenAI API key
🤖 Prompt for AI Agents
In examples/cookbooks/AI_Enrollment_Counselor.ipynb at line 68, replace the API
key placeholder "sk-..." with a safer, non-realistic placeholder such as
"YOUR_OPENAI_API_KEY" to prevent accidental exposure or committing of real API
keys. This change helps users recognize it as a placeholder and avoid confusion
with actual keys.
| "id": "rL-UiB5NOspT" | ||
| }, | ||
| "source": [ | ||
| "[](https://colab.research.google.com/github/Dhivya-Bharathy/PraisonAI/blob/main/examples/cookbooks/AI_Enrollment_Counselor.ipynb)\n" |
🛠️ Refactor suggestion
Update Colab badge URL to point to the main repository
Maintain consistency by pointing to the main repository.
-[](https://colab.research.google.com/github/Dhivya-Bharathy/PraisonAI/blob/main/examples/cookbooks/AI_Enrollment_Counselor.ipynb)
+[](https://colab.research.google.com/github/MervinPraison/PraisonAI/blob/main/examples/cookbooks/AI_Enrollment_Counselor.ipynb)
🤖 Prompt for AI Agents
In examples/cookbooks/AI_Enrollment_Counselor.ipynb at line 27, update the Colab
badge URL to reference the main repository instead of the current fork. Change
the URL in the markdown link to point to the main repository's path for this
notebook to maintain consistency.
Codecov Report

All modified and coverable lines are covered by tests ✅

Additional details and impacted files

@@ Coverage Diff @@
## main #751 +/- ##
=======================================
Coverage 14.23% 14.23%
=======================================
Files 25 25
Lines 2571 2571
Branches 367 367
=======================================
Hits 366 366
Misses 2189 2189
Partials 16 16
Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.
User description
An AI-powered health and fitness agent that provides personalized dietary and exercise recommendations based on user profiles.
Features include BMI calculation, calorie analysis, macronutrient breakdown, personalized workout plans, and dietary preference support (vegetarian, keto, gluten-free).
Built with PraisonAI, it considers age, weight, height, activity level, and fitness goals to create comprehensive health plans with safety recommendations.
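For readers skimming the review, here is a minimal sketch of the kind of calculation described above, using the standard BMI formula and the Mifflin-St Jeor BMR formula quoted in the diffs earlier in this thread. Function names and example values are illustrative, not the notebook's actual API:

```python
# Illustrative only: standard BMI plus Mifflin-St Jeor BMR, as quoted in the review diffs.
def calculate_bmi(weight_kg: float, height_cm: float) -> float:
    height_m = height_cm / 100
    return weight_kg / (height_m ** 2)

def calculate_bmr(weight_kg: float, height_cm: float, age: int, sex: str) -> float:
    bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age
    return bmr + 5 if sex.lower() == "male" else bmr - 161

print(round(calculate_bmi(70, 175), 1))           # ≈ 22.9, the "Normal weight" category
print(round(calculate_bmr(70, 175, 30, "male")))  # ≈ 1649 kcal/day before any activity multiplier
```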
PR Type
Enhancement
Description
• Added comprehensive AI Health & Fitness Agent notebook with BMI calculation, calorie analysis, and personalized exercise recommendations
• Implemented AI Data Analysis Agent with advanced data visualization tools supporting multiple chart types and statistical analysis
• Created Local RAG Document Q&A Agent with vector database integration and multi-format document processing capabilities
• Added AI Meme Creator Agent with browser automation and multi-model support for meme generation
• Developed AI Enrollment Counselor Agent for university admissions automation with document validation
• All agents built using PraisonAI framework with interactive Jupyter notebook interfaces
• Includes safety considerations, progress tracking, and comprehensive tool implementations
Changes walkthrough 📝
ai_data_analysis_agent.ipynb

AI Data Analysis Agent Jupyter Notebook Implementation

examples/cookbooks/ai_data_analysis_agent.ipynb

• Added a complete Jupyter notebook implementing an AI-powered data analysis agent
• Includes comprehensive data visualization tools with support for multiple chart types (bar, line, scatter, histogram, box, pie, heatmap, area)
• Implements data preprocessing capabilities for CSV/Excel files with automatic type conversion and cleaning
• Features statistical analysis tools for descriptive statistics, correlation analysis, outlier detection, and trend analysis
• Provides interactive Google Colab interface with file upload, automated insights generation, and custom visualization options
ai_health_fitness_agent.ipynb

AI Health & Fitness Agent Notebook Implementation

examples/cookbooks/ai_health_fitness_agent.ipynb

• Added comprehensive AI health and fitness agent notebook with BMI calculation, calorie analysis, and exercise recommendations
• Implemented custom tools for BMI calculation, calorie/macro tracking, and personalized exercise plan generation
• Created interactive interface for user profile input and personalized health recommendations
• Included safety considerations, progress tracking, and sample meal plans with dietary preference support
local_rag_document_qa_agent.ipynb

Local RAG Document Q&A Agent Implementation

examples/cookbooks/local_rag_document_qa_agent.ipynb

• Added local RAG document Q&A agent notebook with vector database integration and document processing
• Implemented tools for processing multiple document formats (PDF, TXT, MD, CSV) and text chunking
• Created ChromaDB-based vector storage with similarity search capabilities for document retrieval (see the sketch below)
• Built interactive Q&A system with source attribution and context-aware responses using local LLM models
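A minimal sketch of the ChromaDB index-and-query pattern this notebook is built around; the collection name, chunk size, placeholder text, and query are illustrative assumptions, and ChromaDB's default embedding function is used rather than whatever the notebook configures:

```python
# Sketch only: requires `pip install chromadb`; uses ChromaDB's default embedding function.
import chromadb

client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection(name="document_qa")

# Naive fixed-size chunking of a document's extracted text
document_text = "PraisonAI example text for chunking. " * 100  # stand-in for extracted text
chunk_size = 500
chunks = [document_text[i:i + chunk_size] for i in range(0, len(document_text), chunk_size)]

collection.add(
    documents=chunks,
    ids=[f"doc_0_chunk_{i}" for i in range(len(chunks))],
)

# Similarity search: retrieve the chunks most relevant to a question
results = collection.query(query_texts=["What is this document about?"], n_results=3)
print(results["documents"][0])
```

This mirrors the PersistentClient / get_or_create_collection pattern shown in the VectorDatabaseTool.__init__ suggestion earlier in the review.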
ai_meme_creator_agent.ipynb

AI Meme Creator Agent Notebook Implementation

examples/cookbooks/ai_meme_creator_agent.ipynb

• Added a complete Jupyter notebook for an AI Meme Creator Agent with browser automation capabilities
• Implemented custom tools for meme template search, caption generation, and meme validation
• Integrated multi-model support (OpenAI, Claude, Deepseek) with browser automation using browser-use
• Provided comprehensive meme generation workflow with quality assessment and manual fallback instructions
AI_Enrollment_Counselor.ipynb

AI Enrollment Counselor Agent Notebook

examples/cookbooks/AI_Enrollment_Counselor.ipynb

• Created a Jupyter notebook for an AI Enrollment Counselor agent for university admissions automation
• Implemented document validation functionality to check application completeness
• Added interactive examples for document checking and general admissions inquiries
• Integrated PraisonAI Agents framework for intelligent counseling responses
Summary by CodeRabbit