Add predictive maintenance notebook #602
Conversation
**Walkthrough**

A URL correction was made in the Colab badge link of the existing code analysis agent notebook. Additionally, a new Jupyter notebook was added demonstrating a predictive maintenance workflow with multiple AI agents performing sensor data collection, performance analysis, anomaly detection, failure prediction, and maintenance scheduling asynchronously.
**Sequence Diagram(s)**

```mermaid
sequenceDiagram
    participant User
    participant Workflow
    participant SensorMonitor
    participant PerformanceAnalyzer
    participant AnomalyDetector
    participant FailurePredictor
    participant MaintenanceScheduler
    User->>Workflow: main()
    Workflow->>SensorMonitor: collect_sensor_data()
    Workflow->>PerformanceAnalyzer: analyze_performance()
    Workflow->>AnomalyDetector: detect_anomalies(sensor_data, performance)
    Workflow->>FailurePredictor: predict_failures(anomalies)
    Workflow->>MaintenanceScheduler: schedule_maintenance(predictions)
    MaintenanceScheduler-->>Workflow: maintenance schedule
    Workflow-->>User: print results
```
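The chained calls in this diagram can be sketched as plain asyncio code. This is an illustrative stub only: the function names mirror the diagram, but the bodies and return values are invented and do not use the praisonaiagents API.

```python
import asyncio

# Illustrative stubs mirroring the five agents in the diagram above.
# All bodies and return values are invented for demonstration purposes.
async def collect_sensor_data() -> dict:
    return {"temperature": 86, "vibration": 0.6}

async def analyze_performance() -> dict:
    return {"efficiency": 0.94, "uptime": 0.99}

async def detect_anomalies(sensor_data: dict, performance: dict) -> list:
    # For these nominal readings, no anomalies are flagged.
    return []

async def predict_failures(anomalies: list) -> dict:
    return {"risk": "high"} if anomalies else {"risk": "low"}

async def schedule_maintenance(predictions: dict) -> str:
    return "urgent service" if predictions["risk"] == "high" else "routine inspection"

async def main() -> str:
    # Each step feeds the next, as in the sequence diagram.
    sensor_data = await collect_sensor_data()
    performance = await analyze_performance()
    anomalies = await detect_anomalies(sensor_data, performance)
    predictions = await predict_failures(anomalies)
    return await schedule_maintenance(predictions)

print(asyncio.run(main()))  # → routine inspection
```

The notebook itself wires real agents and tasks into a workflow object instead of calling plain coroutines, but the data flow is the same.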
Hello @DhivyaBharathy-web, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
Hello everyone! Gemini here, providing a summary of this pull request. This PR, authored by @DhivyaBharathy-web, introduces two new example notebooks to the examples/cookbooks directory. The primary focus, as indicated by the title and description, is a notebook demonstrating a multi-agent workflow for predictive maintenance using the praisonaiagents library. Additionally, a second notebook showcasing a code analysis agent workflow is included.
Highlights
- **Predictive Maintenance Workflow**: Adds a new Jupyter notebook (`Predictive_Maintenance_Multi_Agent_Workflow.ipynb`) that illustrates how to set up and run a multi-agent system for predictive maintenance. This workflow includes agents for data collection, performance analysis, anomaly detection, failure prediction, and maintenance scheduling, demonstrating task chaining and conditional execution.
- **Code Analysis Agent**: Adds a new Jupyter notebook (`Code_Analysis_Agent.ipynb`) demonstrating a single-agent workflow for comprehensive code analysis. It shows how to define an agent with a specific role and goal, set up a task with a structured output schema (using Pydantic), and use an external tool (`gitingest`) to provide code context for analysis.
- **Example Usage**: Both notebooks provide step-by-step code and explanations, including dependency installation, API key setup, agent and task definitions, and execution examples, making them valuable resources for users learning to build agent-based workflows.
Changelog
Click here to see the changelog
- `examples/cookbooks/Code_Analysis_Agent.ipynb`
  - Added a new notebook demonstrating a code analysis agent workflow.
  - Includes setup for `praisonaiagents` and `gitingest`.
  - Defines a `Code Analysis Expert` agent and a task for repository analysis.
  - Uses Pydantic model `CodeAnalysisReport` for structured output.
  - Provides a function `analyze_code` to ingest and process code.
  - Includes example output of a code analysis report.
- `examples/cookbooks/Predictive_Maintenance_Multi_Agent_Workflow.ipynb`
  - Added a new notebook demonstrating a multi-agent predictive maintenance workflow.
  - Includes setup for `praisonaiagents`.
  - Defines helper functions to simulate sensor data collection, performance analysis, anomaly detection, failure prediction, and maintenance scheduling.
  - Sets up five distinct agents: `Sensor Monitor`, `Performance Analyzer`, `Anomaly Detector`, `Failure Predictor`, `Maintenance Scheduler`.
  - Defines tasks and chains them together to form a workflow.
  - Demonstrates asynchronous workflow execution using `workflow.astart()`.
Agents gather data,
Predicting future states,
Maintenance is planned.
Code Review
Thank you for this contribution! These notebooks provide valuable examples of multi-agent workflows using PraisonAI. The "Predictive Maintenance" notebook is a great demonstration of a practical application, and the "Code Analysis Agent" notebook offers a useful utility. I've reviewed the changes and have a few suggestions to enhance clarity and maintainability, particularly around API key handling and the presentation of example outputs.
Summary of Findings
- API Key Handling: Both notebooks use placeholder strings for API keys. It's recommended to add notes guiding users towards more secure methods of API key management (e.g., environment variables, input prompts, or secrets management tools) to prevent accidental key exposure.
- **Clarity of Example Outputs**: In both notebooks, cells that display output seem to use hardcoded/static example data. It would be beneficial to clearly label these as examples or ensure that live execution cells produce their own output to avoid user confusion.
- **Pydantic Model Specificity (Code_Analysis_Agent.ipynb)**: The `best_practices` field in `CodeAnalysisReport` uses `List[Dict[str, str]]`. While functional, this could be made more specific by defining a `BestPracticeItem` Pydantic model for better type safety and clarity if this were part of a larger library. I did not comment inline due to review settings.
- **Task Condition Clarity (Predictive_Maintenance_Multi_Agent_Workflow.ipynb)**: The `prediction_task` uses `"normal": ""` in its condition. The behavior of an empty string as a condition target might not be immediately obvious without consulting library documentation. I did not comment inline due to review settings.
- **Incomplete Example Output (Predictive_Maintenance_Multi_Agent_Workflow.ipynb)**: The static example output details results for only 2 out of the 5 defined tasks. If this is a truncated example, it might be helpful to note that. I did not comment inline due to review settings.
Merge Readiness
The notebooks are well-crafted and provide good examples. Addressing the suggestions regarding API key handling and the clarity of example outputs would further improve their usability and maintainability. I am unable to approve this pull request myself, but with these minor adjustments, the changes should be in good shape for merging after review and approval by others.
The flagged cell (notebook JSON excerpt):

```json
"outputs": [],
"source": [
  "import os\n",
  "os.environ['OPENAI_API_KEY'] = 'your_api_key_here'"
```
For improved security and usability, especially when notebooks are shared or version-controlled, it's good practice to guide users on securely managing API keys. While 'your_api_key_here' is a clear placeholder, consider adding a note in the preceding markdown cell suggesting methods like using environment files (.env with python-dotenv), Jupyter input prompts, or Colab secrets for handling sensitive information like API keys. This helps prevent accidental exposure of keys.
For example, you could suggest:
```python
# Option 1: Using an input prompt (simple for notebooks)
# import os
# from getpass import getpass
# openai_api_key = getpass("Enter your OpenAI API key: ")
# os.environ['OPENAI_API_KEY'] = openai_api_key

# Option 2: Using python-dotenv (requires a .env file)
# from dotenv import load_dotenv
# load_dotenv()
# openai_api_key = os.getenv("OPENAI_API_KEY")
# if not openai_api_key:
#     print("OPENAI_API_KEY not found in .env file or environment variables.")
# else:
#     os.environ['OPENAI_API_KEY'] = openai_api_key
```

What are your thoughts on adding a small note about this in the markdown cell above this code block?
The cell in question (notebook JSON excerpt, diff-table pipes removed):

```json
"cell_type": "code",
"execution_count": 3,
"metadata": {
  "colab": {
    "base_uri": "https://localhost:8080/",
    "height": 1000
  },
  "id": "5BzccaeFmvzg",
  "outputId": "8b4cddae-b6b3-4a92-8a13-30694cd09c52"
},
"outputs": [
  {
    "output_type": "display_data",
    "data": {
      "text/plain": [
        "<IPython.core.display.Markdown object>"
      ],
      "text/markdown": "\n### 👤 Agent: Code Analysis Expert\n\n**Role**: Provides comprehensive code evaluation and recommendations \n**Backstory**: Expert in architecture, best practices, and technical assessment \n"
    },
    "metadata": {}
  },
  {
    "output_type": "stream",
    "name": "stdout",
    "text": [
      "─── 📊 AGENT CODE ANALYSIS REPORT ───\n",
      "{\n",
      "  \"overall_quality\": 85,\n",
      "  \"code_metrics\": [\n",
      "    {\n",
      "      \"category\": \"Architecture and Design\",\n",
      "      \"score\": 80,\n",
      "      \"findings\": [\n",
      "        \"Modular structure with clear separation of concerns.\",\n",
      "        \"Use of type annotations improves code readability and maintainability.\"\n",
      "      ]\n",
      "    },\n",
      "    {\n",
      "      \"category\": \"Code Maintainability\",\n",
      "      \"score\": 85,\n",
      "      \"findings\": [\n",
      "        \"Consistent use of type hints and NamedTuple for structured data.\",\n",
      "        \"Logical organization of functions and classes.\"\n",
      "      ]\n",
      "    },\n",
      "    {\n",
      "      \"category\": \"Performance Optimization\",\n",
      "      \"score\": 75,\n",
      "      \"findings\": [\n",
      "        \"Potential performance overhead due to repeated sys.stdout.write calls.\",\n",
      "        \"Efficient use of optional parameters to control execution flow.\"\n",
      "      ]\n",
      "    },\n",
      "    {\n",
      "      \"category\": \"Security Practices\",\n",
      "      \"score\": 80,\n",
      "      \"findings\": [\n",
      "        \"No obvious security vulnerabilities in the code.\",\n",
      "        \"Proper encapsulation of functionality.\"\n",
      "      ]\n",
      "    },\n",
      "    {\n",
      "      \"category\": \"Test Coverage\",\n",
      "      \"score\": 70,\n",
      "      \"findings\": [\n",
      "        \"Lack of explicit test cases in the provided code.\",\n",
      "        \"Use of type checking suggests some level of validation.\"\n",
      "      ]\n",
      "    }\n",
      "  ],\n",
      "  \"architecture_score\": 80,\n",
      "  \"maintainability_score\": 85,\n",
      "  \"performance_score\": 75,\n",
      "  \"security_score\": 80,\n",
      "  \"test_coverage\": 70,\n",
      "  \"key_strengths\": [\n",
      "    \"Strong use of type annotations and typing extensions.\",\n",
      "    \"Clear separation of CLI argument parsing and business logic.\"\n",
      "  ],\n",
      "  \"improvement_areas\": [\n",
      "    \"Increase test coverage to ensure robustness.\",\n",
      "    \"Optimize I/O operations to improve performance.\"\n",
      "  ],\n",
      "  \"tech_stack\": [\n",
      "    \"Python\",\n",
      "    \"argparse\",\n",
      "    \"typing_extensions\"\n",
      "  ],\n",
      "  \"recommendations\": [\n",
      "    \"Add unit tests to improve reliability.\",\n",
      "    \"Consider async I/O for improved performance in CLI tools.\"\n",
      "  ]\n",
      "}\n"
    ]
  }
],
"source": [
  "import json\n",
  "from IPython.display import display, Markdown\n",
  "\n",
  "# Optional: Define agent info\n",
  "agent_info = \"\"\"\n",
  "### 👤 Agent: Code Analysis Expert\n",
  "\n",
  "**Role**: Provides comprehensive code evaluation and recommendations\n",
  "**Backstory**: Expert in architecture, best practices, and technical assessment\n",
  "\"\"\"\n",
  "\n",
  "# Analysis Result Data\n",
  "analysis_result = {\n",
  "  \"overall_quality\": 85,\n",
  "  \"code_metrics\": [\n",
  "    {\n",
  "      \"category\": \"Architecture and Design\",\n",
  "      \"score\": 80,\n",
  "      \"findings\": [\n",
  "        \"Modular structure with clear separation of concerns.\",\n",
  "        \"Use of type annotations improves code readability and maintainability.\"\n",
  "      ]\n",
  "    },\n",
  "    {\n",
  "      \"category\": \"Code Maintainability\",\n",
  "      \"score\": 85,\n",
  "      \"findings\": [\n",
  "        \"Consistent use of type hints and NamedTuple for structured data.\",\n",
  "        \"Logical organization of functions and classes.\"\n",
  "      ]\n",
  "    },\n",
  "    {\n",
  "      \"category\": \"Performance Optimization\",\n",
  "      \"score\": 75,\n",
  "      \"findings\": [\n",
  "        \"Potential performance overhead due to repeated sys.stdout.write calls.\",\n",
  "        \"Efficient use of optional parameters to control execution flow.\"\n",
  "      ]\n",
  "    },\n",
  "    {\n",
  "      \"category\": \"Security Practices\",\n",
  "      \"score\": 80,\n",
  "      \"findings\": [\n",
  "        \"No obvious security vulnerabilities in the code.\",\n",
  "        \"Proper encapsulation of functionality.\"\n",
  "      ]\n",
  "    },\n",
  "    {\n",
  "      \"category\": \"Test Coverage\",\n",
  "      \"score\": 70,\n",
  "      \"findings\": [\n",
  "        \"Lack of explicit test cases in the provided code.\",\n",
  "        \"Use of type checking suggests some level of validation.\"\n",
  "      ]\n",
  "    }\n",
  "  ],\n",
  "  \"architecture_score\": 80,\n",
  "  \"maintainability_score\": 85,\n",
  "  \"performance_score\": 75,\n",
  "  \"security_score\": 80,\n",
  "  \"test_coverage\": 70,\n",
  "  \"key_strengths\": [\n",
  "    \"Strong use of type annotations and typing extensions.\",\n",
  "    \"Clear separation of CLI argument parsing and business logic.\"\n",
  "  ],\n",
  "  \"improvement_areas\": [\n",
  "    \"Increase test coverage to ensure robustness.\",\n",
  "    \"Optimize I/O operations to improve performance.\"\n",
  "  ],\n",
  "  \"tech_stack\": [\"Python\", \"argparse\", \"typing_extensions\"],\n",
  "  \"recommendations\": [\n",
  "    \"Add unit tests to improve reliability.\",\n",
  "    \"Consider async I/O for improved performance in CLI tools.\"\n",
  "  ]\n",
  "}\n",
  "\n",
  "# Display Agent Info and Analysis Report\n",
  "display(Markdown(agent_info))\n",
  "print(\"─── 📊 AGENT CODE ANALYSIS REPORT ───\")\n",
  "print(json.dumps(analysis_result, indent=4))\n"
],
"id": "5BzccaeFmvzg"
```
This cell currently displays hardcoded example agent information and analysis results. While this is useful for showing the expected output format without requiring users to run the full analysis (which might need API keys or be time-consuming), it could be clearer to users that this is static example data, not the live output of the analyze_code function defined earlier.
Consider adding a comment at the beginning of this cell or in a preceding markdown cell to explicitly state that this is an illustrative example of the output. For instance:
```python
# Note: The following demonstrates an example of the agent's output format.
# To run a live analysis, you would call the `analyze_code` function
# with a repository URL or local path, e.g.:
# live_analysis_report = analyze_code("https://github.com/user/repo")
# print(json.dumps(live_analysis_report.model_dump(), indent=4))
```

This would help manage user expectations and guide them on how to obtain live results. What do you think about this clarification?
The flagged cell (notebook JSON excerpt):

```json
"outputs": [],
"source": [
  "import os\n",
  "os.environ['OPENAI_API_KEY'] = 'enter your api key'"
```
Similar to the other notebook, it's good practice to guide users on securely managing API keys. The placeholder 'enter your api key' is clear, but a note in the markdown cell above could suggest more secure methods like environment variables, Jupyter input prompts, or Colab secrets.
For example, you could add a markdown note like:
"Note: For security, it's recommended to use environment variables or a secrets management tool for your API key rather than hardcoding it directly in the notebook, especially if you plan to share or version control it."
Would you consider adding such a note for users?
The cell in question (notebook JSON excerpt, diff-table pipes removed):

```json
"cell_type": "code",
"source": [
  "print(\"\"\"\n",
  "[Starting Predictive Maintenance Workflow...\n",
  "==================================================\n",
  "╭─ Agent Info ───────────────────────────────────╮\n",
  "│                                                │\n",
  "│  👤 Agent: Sensor Monitor                      │\n",
  "│  Role: Data Collection                         │\n",
  "│  Tools: collect_sensor_data                    │\n",
  "│                                                │\n",
  "╰────────────────────────────────────────────────╯\n",
  "\n",
  "╭─ Agent Info ───────────────────────────────────╮\n",
  "│                                                │\n",
  "│  👤 Agent: Performance Analyzer                │\n",
  "│  Role: Performance Analysis                    │\n",
  "│  Tools: analyze_performance                    │\n",
  "│                                                │\n",
  "╰────────────────────────────────────────────────╯\n",
  "\n",
  "[20:01:26] INFO [20:01:26] process.py:429 INFO Task schedule_maintenance has no next tasks, ending workflow process.py:429\n",
  "\n",
  "Maintenance Planning Results:\n",
  "==================================================\n",
  "\n",
  "Task: 0\n",
  "Result: The sensor readings you have collected are as follows:\n",
  "\n",
  "- **Temperature**: 86°F\n",
  "- **Vibration**: 0.6 (units not specified, but typically measured in g-forces or mm/s)\n",
  "- **Pressure**: 101 (units not specified, but typically measured in kPa or psi)\n",
  "- **Noise Level**: 81 dB\n",
  "\n",
  "Here's a brief analysis of these readings:\n",
  "\n",
  "1. **Temperature**: At 86°F, the temperature is relatively warm. Depending on the context (e.g., industrial equipment, environmental monitoring), this could be within normal operating conditions or might require cooling measures if it's above the optimal range.\n",
  "\n",
  "2. **Vibration**: A vibration level of 0.6 is generally low, but the significance depends on the type of equipment being monitored. For precision machinery, even small vibrations can be critical, whereas for more robust equipment, this might be negligible.\n",
  "\n",
  "3. **Pressure**: A pressure reading of 101 is often within normal ranges for many systems, but without specific units or context, it's hard to determine if this is optimal or requires adjustment.\n",
  "\n",
  "4. **Noise Level**: At 81 dB, the noise level is relatively high. Prolonged exposure to noise levels above 85 dB can be harmful to hearing, so if this is a workplace environment, it might be necessary to implement noise reduction measures or provide hearing protection.\n",
  "\n",
  "Overall, these readings should be compared against the specific operational thresholds and safety standards relevant to the equipment or environment being monitored. If any values are outside of acceptable ranges, further investigation or corrective actions may be needed.\n",
  "--------------------------------------------------\n",
  "\n",
  "Task: 1\n",
  "Result: Based on the provided operational metrics, here's an analysis of the equipment performance:\n",
  "\n",
  "1. **Efficiency (94%)**:\n",
  "   - The equipment is operating at a high efficiency level, with 94% of the input being effectively converted into useful output. This suggests\n",
  "that the equipment is well-maintained and optimized for performance. However, there is still a 6% margin for improvement, which could be addressed by identifying and minimizing any inefficiencies in the process.\n",
  "\n",
  "2. **Uptime (99%)**:\n",
  "   - The equipment has an excellent uptime rate of 99%, indicating that it is available and operational almost all the time. This is a strong indicator of reliability and suggests that downtime due to maintenance or unexpected failures is minimal. Maintaining this level of uptime should\n",
  "be a priority, as it directly impacts productivity and operational continuity.\n",
  "\n",
  "3. **Output Quality (94%)**:\n",
  "   - The output quality is also at 94%, which is a positive sign that the equipment is producing high-quality products or results. However, similar to efficiency, there is room for improvement. Efforts could be made to identify any factors that might be affecting quality, such as calibration issues, material inconsistencies, or process deviations.\n",
  "\n",
  "**Overall Assessment**:\n",
  "The equipment is performing well across all key metrics, with high efficiency, uptime, and output quality. To further enhance performance, focus should be placed on fine-tuning processes to close the small gaps in efficiency and quality. Regular maintenance, monitoring, and process optimization can help sustain and potentially improve these metrics.\n",
  "--------------------------------------------------]\n",
  "\"\"\")"
],
"metadata": {
  "colab": {
    "base_uri": "https://localhost:8080/"
  },
  "id": "4hrEJ5S6XpJ7",
  "outputId": "899e677d-19d5-4e0a-d9ab-ebc945aeee1b"
},
"id": "4hrEJ5S6XpJ7",
"execution_count": 4,
"outputs": [
  {
    "output_type": "stream",
    "name": "stdout",
    "text": [
      "\n",
      "[Starting Predictive Maintenance Workflow...\n",
      "==================================================\n",
      "╭─ Agent Info ───────────────────────────────────╮\n",
      "│                                                │\n",
      "│  👤 Agent: Sensor Monitor                      │\n",
      "│  Role: Data Collection                         │\n",
      "│  Tools: collect_sensor_data                    │\n",
      "│                                                │\n",
      "╰────────────────────────────────────────────────╯\n",
      "\n",
      "╭─ Agent Info ───────────────────────────────────╮\n",
      "│                                                │\n",
      "│  👤 Agent: Performance Analyzer                │\n",
      "│  Role: Performance Analysis                    │\n",
      "│  Tools: analyze_performance                    │\n",
      "│                                                │\n",
      "╰────────────────────────────────────────────────╯\n",
      "\n",
      "[20:01:26] INFO [20:01:26] process.py:429 INFO Task schedule_maintenance has no next tasks, ending workflow process.py:429\n",
      "\n",
      "Maintenance Planning Results:\n",
      "==================================================\n",
      "\n",
      "Task: 0\n",
      "Result: The sensor readings you have collected are as follows:\n",
      "\n",
      "- **Temperature**: 86°F\n",
      "- **Vibration**: 0.6 (units not specified, but typically measured in g-forces or mm/s)\n",
      "- **Pressure**: 101 (units not specified, but typically measured in kPa or psi)\n",
      "- **Noise Level**: 81 dB\n",
      "\n",
      "Here's a brief analysis of these readings:\n",
      "\n",
      "1. **Temperature**: At 86°F, the temperature is relatively warm. Depending on the context (e.g., industrial equipment, environmental monitoring), this could be within normal operating conditions or might require cooling measures if it's above the optimal range.\n",
      "\n",
      "2. **Vibration**: A vibration level of 0.6 is generally low, but the significance depends on the type of equipment being monitored. For precision machinery, even small vibrations can be critical, whereas for more robust equipment, this might be negligible.\n",
      "\n",
      "3. **Pressure**: A pressure reading of 101 is often within normal ranges for many systems, but without specific units or context, it's hard to determine if this is optimal or requires adjustment.\n",
      "\n",
      "4. **Noise Level**: At 81 dB, the noise level is relatively high. Prolonged exposure to noise levels above 85 dB can be harmful to hearing, so if this is a workplace environment, it might be necessary to implement noise reduction measures or provide hearing protection.\n",
      "\n",
      "Overall, these readings should be compared against the specific operational thresholds and safety standards relevant to the equipment or environment being monitored. If any values are outside of acceptable ranges, further investigation or corrective actions may be needed.\n",
      "--------------------------------------------------\n",
      "\n",
      "Task: 1\n",
      "Result: Based on the provided operational metrics, here's an analysis of the equipment performance:\n",
      "\n",
      "1. **Efficiency (94%)**:\n",
      "   - The equipment is operating at a high efficiency level, with 94% of the input being effectively converted into useful output. This suggests \n",
      "that the equipment is well-maintained and optimized for performance. However, there is still a 6% margin for improvement, which could be addressed by identifying and minimizing any inefficiencies in the process.\n",
      "\n",
      "2. **Uptime (99%)**:\n",
      "   - The equipment has an excellent uptime rate of 99%, indicating that it is available and operational almost all the time. This is a strong indicator of reliability and suggests that downtime due to maintenance or unexpected failures is minimal. Maintaining this level of uptime should \n",
      "be a priority, as it directly impacts productivity and operational continuity.\n",
      "\n",
      "3. **Output Quality (94%)**:\n",
      "   - The output quality is also at 94%, which is a positive sign that the equipment is producing high-quality products or results. However, similar to efficiency, there is room for improvement. Efforts could be made to identify any factors that might be affecting quality, such as calibration issues, material inconsistencies, or process deviations.\n",
      "\n",
      "**Overall Assessment**:\n",
      "The equipment is performing well across all key metrics, with high efficiency, uptime, and output quality. To further enhance performance, focus should be placed on fine-tuning processes to close the small gaps in efficiency and quality. Regular maintenance, monitoring, and process optimization can help sustain and potentially improve these metrics.\n",
      "--------------------------------------------------]\n",
      "\n"
    ]
  }
]
}
```
This cell prints a static string representing an example output log. The actual asynchronous workflow execution (`await main()`) is in the preceding cell (Cell 14), which currently has `execution_count: null`. This might be confusing for users, as they might expect Cell 14 to produce output, and then see this static output in Cell 15.

To improve clarity, you could:

- Ensure Cell 14 (with `await main()`) is executed and its output is visible when the notebook is saved/committed.
- Alternatively, if Cell 15 is intended as a static example, clearly label it as such in a preceding markdown cell or a comment within the cell. For example: "Note: The following is an example of the output log you might expect from running the workflow."
This would help users understand whether they are seeing live results or a pre-defined example. What are your thoughts on this?
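One way to act on this suggestion is to gate the live run on configuration and label the static fallback. The sketch below uses invented names (`STATIC_EXAMPLE`, `show_results`); in the notebook, the real `main()` coroutine (wrapped in something like `lambda: asyncio.run(main())`) would stand in for `live_runner`.

```python
import os

# Invented names for illustration: STATIC_EXAMPLE stands in for the
# notebook's hardcoded log; live_runner stands in for the real workflow call.
STATIC_EXAMPLE = "Task: 0\nResult: ... (truncated static sample)"

def show_results(live_runner=None) -> str:
    # Run live only when a key is configured AND a runner was supplied;
    # otherwise return the clearly labeled static example.
    if os.environ.get("OPENAI_API_KEY") and live_runner is not None:
        return live_runner()
    return "Note: example output only, not a live run.\n" + STATIC_EXAMPLE

print(show_results())
```

The explicit label in the fallback string makes it obvious to readers whether they are looking at live results or a pre-defined example.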
Actionable comments posted: 5
🧹 Nitpick comments (2)
examples/cookbooks/Predictive_Maintenance_Multi_Agent_Workflow.ipynb (1)
129-137: Add input validation and improve anomaly detection logic.

The function lacks input validation and could benefit from more robust threshold management.
```diff
 def detect_anomalies(sensor_data: Dict, performance: Dict):
+    if not sensor_data or not performance:
+        raise ValueError("sensor_data and performance dictionaries cannot be empty")
+
     anomalies = []
+
+    # Define thresholds as constants for maintainability
+    TEMP_THRESHOLD = 90
+    VIBRATION_THRESHOLD = 1.2
+    EFFICIENCY_THRESHOLD = 0.85
+
-    if sensor_data["temperature"] > 90:
+    if sensor_data.get("temperature", 0) > TEMP_THRESHOLD:
         anomalies.append({"type": "temperature_high", "severity": "critical"})
-    if sensor_data["vibration"] > 1.2:
+    if sensor_data.get("vibration", 0) > VIBRATION_THRESHOLD:
         anomalies.append({"type": "vibration_excess", "severity": "warning"})
-    if performance["efficiency"] < 0.85:
+    if performance.get("efficiency", 1.0) < EFFICIENCY_THRESHOLD:
         anomalies.append({"type": "efficiency_low", "severity": "warning"})
     return anomalies
```

examples/cookbooks/Code_Analysis_Agent.ipynb (1)
420-421: Remove the unnecessary directory change command.

The `%cd PraisonAI` command appears unrelated to the code analysis example and may confuse users.

```diff
-  {
-   "cell_type": "code",
-   "source": [
-    "%cd PraisonAI"
-   ],
-   "metadata": {
-    "colab": {
-     "base_uri": "https://localhost:8080/"
-    },
-    "id": "mNZjZLedorlu",
-    "outputId": "82c96cdf-e2e5-4a9c-e633-f0fc225ad973"
-   },
-   "id": "mNZjZLedorlu",
-   "execution_count": 5,
-   "outputs": [
-    {
-     "output_type": "stream",
-     "name": "stdout",
-     "text": [
-      "/content/PraisonAI\n"
-     ]
-    }
-   ]
-  },
```
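Applied in full, the anomaly-detection refactor suggested above behaves as a self-contained function. The sample readings below are illustrative, not taken from the notebook:

```python
from typing import Dict

# Thresholds pulled out as named constants, per the suggestion
TEMP_THRESHOLD = 90
VIBRATION_THRESHOLD = 1.2
EFFICIENCY_THRESHOLD = 0.85

def detect_anomalies(sensor_data: Dict, performance: Dict):
    # Fail fast on empty inputs instead of silently returning no anomalies
    if not sensor_data or not performance:
        raise ValueError("sensor_data and performance dictionaries cannot be empty")
    anomalies = []
    if sensor_data.get("temperature", 0) > TEMP_THRESHOLD:
        anomalies.append({"type": "temperature_high", "severity": "critical"})
    if sensor_data.get("vibration", 0) > VIBRATION_THRESHOLD:
        anomalies.append({"type": "vibration_excess", "severity": "warning"})
    if performance.get("efficiency", 1.0) < EFFICIENCY_THRESHOLD:
        anomalies.append({"type": "efficiency_low", "severity": "warning"})
    return anomalies

# Illustrative readings: hot machine, normal vibration, low efficiency
found = detect_anomalies({"temperature": 95, "vibration": 0.8}, {"efficiency": 0.80})
```

Using `.get()` with defaults means a missing sensor field degrades to "no anomaly" rather than a `KeyError`, which is a deliberate trade-off worth documenting in the notebook.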
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- examples/cookbooks/Code_Analysis_Agent.ipynb (1 hunks)
- examples/cookbooks/Predictive_Maintenance_Multi_Agent_Workflow.ipynb (1 hunks)
🔇 Additional comments (3)
examples/cookbooks/Predictive_Maintenance_Multi_Agent_Workflow.ipynb (1)
182-192: Excellent agent architecture design.

The multi-agent setup with clear role separation and task dependencies demonstrates good architectural principles. The async execution configuration and conditional branching show sophisticated workflow design.
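As an illustrative sketch only (plain `asyncio`, not the notebook's actual PraisonAI wiring), fanning out the independent data-gathering steps concurrently might look like this; the two stub functions stand in for the notebook's tools:

```python
import asyncio

# Simplified stand-ins for the notebook's data-gathering tools
def collect_sensor_data():
    return {"temperature": 78, "vibration": 0.7}

def analyze_performance():
    return {"efficiency": 0.91, "uptime": 0.97}

async def gather_inputs():
    # The two steps are independent, so run them concurrently
    loop = asyncio.get_running_loop()
    sensors, perf = await asyncio.gather(
        loop.run_in_executor(None, collect_sensor_data),
        loop.run_in_executor(None, analyze_performance),
    )
    return sensors, perf

sensors, perf = asyncio.run(gather_inputs())
```

Downstream steps such as anomaly detection would then consume `sensors` and `perf` sequentially, mirroring the task dependencies in the notebook.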
examples/cookbooks/Code_Analysis_Agent.ipynb (2)
95-116: Excellent Pydantic model design.

The data models are comprehensive and well-structured, covering all essential aspects of code analysis. The use of typed fields and nested models promotes type safety and clear data contracts.
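The nested, typed shape praised here can be sketched with stdlib dataclasses (the notebook's real models use Pydantic; the field names below are hypothetical, not the notebook's actual `CodeAnalysisReport` schema):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class QualityMetric:
    name: str
    score: float  # normalized 0.0-1.0

@dataclass
class CodeAnalysisReport:
    summary: str
    metrics: List[QualityMetric] = field(default_factory=list)

    def overall_score(self) -> float:
        # Average of the nested metric scores; 0.0 when no metrics exist
        if not self.metrics:
            return 0.0
        return sum(m.score for m in self.metrics) / len(self.metrics)

report = CodeAnalysisReport(
    summary="Small utility library",
    metrics=[QualityMetric("readability", 0.9), QualityMetric("test coverage", 0.7)],
)
```

Pydantic adds runtime validation on top of this structure, which is what makes `output_pydantic` useful for constraining LLM output.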
138-168: Well-designed agent configuration.

The agent role, goal, and backstory are clearly defined. The task description is comprehensive and the use of `output_pydantic` ensures structured responses.
| "import os\n", | ||
| "os.environ['OPENAI_API_KEY'] = 'enter your api key'" |
🛠️ Refactor suggestion
Improve API key security handling.
The hardcoded API key placeholder poses a security risk and provides poor user experience.
```diff
-import os
-os.environ['OPENAI_API_KEY'] = 'enter your api key'
+import os
+import getpass
+
+# Secure API key input
+if 'OPENAI_API_KEY' not in os.environ:
+    os.environ['OPENAI_API_KEY'] = getpass.getpass("Enter your OpenAI API key: ")
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
import os
import getpass

# Secure API key input
if 'OPENAI_API_KEY' not in os.environ:
    os.environ['OPENAI_API_KEY'] = getpass.getpass("Enter your OpenAI API key: ")
```
🤖 Prompt for AI Agents
In examples/cookbooks/Predictive_Maintenance_Multi_Agent_Workflow.ipynb around
lines 65 to 66, the API key is hardcoded as a placeholder string, which is
insecure and inconvenient. Replace the hardcoded key assignment with a method to
securely load the API key, such as reading it from an environment variable or a
secure configuration file, and update the code to handle missing keys gracefully
by prompting the user or raising an informative error.
| "def collect_sensor_data():\n", | ||
| " return {\n", | ||
| " \"temperature\": 75 + (int(time.time()) % 20),\n", | ||
| " \"vibration\": 0.5 + (int(time.time()) % 10) / 10,\n", | ||
| " \"pressure\": 100 + (int(time.time()) % 50),\n", | ||
| " \"noise_level\": 60 + (int(time.time()) % 30)\n", | ||
| " }\n", | ||
| "\n", | ||
| "def analyze_performance():\n", | ||
| " return {\n", | ||
| " \"efficiency\": 0.8 + (int(time.time()) % 20) / 100,\n", | ||
| " \"uptime\": 0.95 + (int(time.time()) % 5) / 100,\n", | ||
| " \"output_quality\": 0.9 + (int(time.time()) % 10) / 100\n", | ||
| " }\n", |
🛠️ Refactor suggestion
Address non-deterministic behavior in sensor simulation.
Using `time.time()` for simulation creates non-reproducible results and potential edge cases in testing scenarios.
```diff
+import random
+
+# Set seed for reproducible results
+random.seed(42)
+
 def collect_sensor_data():
     return {
-        "temperature": 75 + (int(time.time()) % 20),
-        "vibration": 0.5 + (int(time.time()) % 10) / 10,
-        "pressure": 100 + (int(time.time()) % 50),
-        "noise_level": 60 + (int(time.time()) % 30)
+        "temperature": 75 + random.randint(0, 19),
+        "vibration": 0.5 + random.randint(0, 9) / 10,
+        "pressure": 100 + random.randint(0, 49),
+        "noise_level": 60 + random.randint(0, 29)
     }

 def analyze_performance():
     return {
-        "efficiency": 0.8 + (int(time.time()) % 20) / 100,
-        "uptime": 0.95 + (int(time.time()) % 5) / 100,
-        "output_quality": 0.9 + (int(time.time()) % 10) / 100
+        "efficiency": 0.8 + random.randint(0, 19) / 100,
+        "uptime": 0.95 + random.randint(0, 4) / 100,
+        "output_quality": 0.9 + random.randint(0, 9) / 100
     }
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
import random

# Set seed for reproducible results
random.seed(42)

def collect_sensor_data():
    return {
        "temperature": 75 + random.randint(0, 19),
        "vibration": 0.5 + random.randint(0, 9) / 10,
        "pressure": 100 + random.randint(0, 49),
        "noise_level": 60 + random.randint(0, 29)
    }

def analyze_performance():
    return {
        "efficiency": 0.8 + random.randint(0, 19) / 100,
        "uptime": 0.95 + random.randint(0, 4) / 100,
        "output_quality": 0.9 + random.randint(0, 9) / 100
    }
```
🤖 Prompt for AI Agents
In examples/cookbooks/Predictive_Maintenance_Multi_Agent_Workflow.ipynb around
lines 114 to 127, the sensor data simulation uses time.time() which causes
non-deterministic and non-reproducible results. Replace the time-based random
values with a seeded random number generator or fixed values to ensure
consistent and reproducible outputs during testing and simulation.
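The reproducibility point generalizes: a seeded, local `random.Random` generator makes a simulated sensor stream repeatable run-to-run without touching global state (illustrative sketch, not the notebook's code):

```python
import random

def simulate_readings(n: int, seed: int = 42):
    # A local generator avoids mutating the module-level random state
    rng = random.Random(seed)
    return [75 + rng.randint(0, 19) for _ in range(n)]

run1 = simulate_readings(5)
run2 = simulate_readings(5)  # same seed -> identical stream
```

Tests can then assert on exact values, and changing the seed yields a different but equally reproducible scenario.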
  {
   "cell_type": "markdown",
   "source": [
    "[](https://colab.research.google.com/github/DhivyaBharathy-web/PraisonAI/blob/main/examples/cookbooks/Predictive_Maintenance_Multi-Agent_Workflow.ipynb)"
Fix the Colab badge URL path.
The Colab badge URL contains a hyphen in the filename that doesn't match the actual filename.
```diff
-[](https://colab.research.google.com/github/DhivyaBharathy-web/PraisonAI/blob/main/examples/cookbooks/Predictive_Maintenance_Multi-Agent_Workflow.ipynb)
+[](https://colab.research.google.com/github/DhivyaBharathy-web/PraisonAI/blob/main/examples/cookbooks/Predictive_Maintenance_Multi_Agent_Workflow.ipynb)
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```markdown
"[](https://colab.research.google.com/github/DhivyaBharathy-web/PraisonAI/blob/main/examples/cookbooks/Predictive_Maintenance_Multi_Agent_Workflow.ipynb)"
```
🤖 Prompt for AI Agents
In examples/cookbooks/Predictive_Maintenance_Multi_Agent_Workflow.ipynb at line
18, the Colab badge URL contains a hyphen in the filename that does not match
the actual notebook filename. Update the URL to use the correct filename with
underscores instead of hyphens to ensure the badge links properly to the
notebook on Colab.
| "import os\n", | ||
| "os.environ['OPENAI_API_KEY'] = 'your_api_key_here'" |
Improve API key security handling.
Same security issue as the other notebook - hardcoded API key placeholder.
```diff
-import os
-os.environ['OPENAI_API_KEY'] = 'your_api_key_here'
+import os
+import getpass
+
+if 'OPENAI_API_KEY' not in os.environ:
+    os.environ['OPENAI_API_KEY'] = getpass.getpass("Enter your OpenAI API key: ")
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
import os
import getpass

if 'OPENAI_API_KEY' not in os.environ:
    os.environ['OPENAI_API_KEY'] = getpass.getpass("Enter your OpenAI API key: ")
```
🤖 Prompt for AI Agents
In examples/cookbooks/Code_Analysis_Agent.ipynb around lines 66 to 67, the API
key is hardcoded as a placeholder string, which is insecure. Replace the
hardcoded API key assignment with code that reads the API key from a secure
environment variable or external configuration, such as using os.environ to
fetch the key from the system environment instead of embedding it directly in
the code.
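The pattern the prompt describes (read the key from the environment, fail clearly when it is missing) can be sketched as follows; `require_api_key` is a hypothetical helper name, while `OPENAI_API_KEY` matches the variable used in the notebook:

```python
import os

def require_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Fetch a key from the environment, failing with a clear message if absent."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(
            f"{name} is not set. Export it before running the notebook, "
            f"e.g. export {name}=sk-..."
        )
    return key
```

This keeps secrets out of the committed notebook while still surfacing a readable error instead of a cryptic downstream authentication failure.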
| "def analyze_code(code_source: str) -> CodeAnalysisReport:\n", | ||
| " \"\"\"\n", | ||
| " Analyze code from directory path or GitHub URL\n", | ||
| " \"\"\"\n", | ||
| " # Ingest code content\n", | ||
| " summary, tree, content = ingest(code_source)\n", | ||
| "\n", | ||
| " # Concatenate context into structured format\n", | ||
| " context_text = f\"\"\"\n", | ||
| " CODE REPOSITORY ANALYSIS\n", | ||
| " =======================\n", | ||
| "\n", | ||
| " SUMMARY\n", | ||
| " -------\n", | ||
| " {summary}\n", | ||
| "\n", | ||
| " REPOSITORY STRUCTURE\n", | ||
| " -------------------\n", | ||
| " {tree}\n", | ||
| "\n", | ||
| " SOURCE CODE\n", | ||
| " -----------\n", | ||
| " {content}\n", | ||
| " \"\"\"\n", | ||
| "\n", | ||
| " # Initialize and run analysis\n", | ||
| " agents = PraisonAIAgents(\n", | ||
| " agents=[code_analyzer],\n", | ||
| " tasks=[code_analysis_task]\n", | ||
| " )\n", | ||
| "\n", | ||
| " return agents.start(context_text)" |
🛠️ Refactor suggestion
Add error handling and input validation.
The function lacks error handling for potential issues with code ingestion and analysis.
```diff
 def analyze_code(code_source: str) -> CodeAnalysisReport:
     """
     Analyze code from directory path or GitHub URL
     """
+    if not code_source or not code_source.strip():
+        raise ValueError("code_source cannot be empty")
+
+    try:
     # Ingest code content
     summary, tree, content = ingest(code_source)
+    except Exception as e:
+        raise RuntimeError(f"Failed to ingest code from {code_source}: {str(e)}")

     # Concatenate context into structured format
     context_text = f"""
     CODE REPOSITORY ANALYSIS
     =======================

     SUMMARY
     -------
     {summary}

     REPOSITORY STRUCTURE
     -------------------
     {tree}

     SOURCE CODE
     -----------
     {content}
     """

+    try:
     # Initialize and run analysis
     agents = PraisonAIAgents(
         agents=[code_analyzer],
         tasks=[code_analysis_task]
     )

     return agents.start(context_text)
+    except Exception as e:
+        raise RuntimeError(f"Failed to analyze code: {str(e)}")
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
def analyze_code(code_source: str) -> CodeAnalysisReport:
    """
    Analyze code from directory path or GitHub URL
    """
    if not code_source or not code_source.strip():
        raise ValueError("code_source cannot be empty")

    try:
        # Ingest code content
        summary, tree, content = ingest(code_source)
    except Exception as e:
        raise RuntimeError(f"Failed to ingest code from {code_source}: {str(e)}")

    # Concatenate context into structured format
    context_text = f"""
    CODE REPOSITORY ANALYSIS
    =======================

    SUMMARY
    -------
    {summary}

    REPOSITORY STRUCTURE
    -------------------
    {tree}

    SOURCE CODE
    -----------
    {content}
    """

    try:
        # Initialize and run analysis
        agents = PraisonAIAgents(
            agents=[code_analyzer],
            tasks=[code_analysis_task]
        )
        return agents.start(context_text)
    except Exception as e:
        raise RuntimeError(f"Failed to analyze code: {str(e)}")
```
🤖 Prompt for AI Agents
In examples/cookbooks/Code_Analysis_Agent.ipynb around lines 191 to 222, the
analyze_code function lacks error handling and input validation. Add checks to
validate the code_source input before processing, and wrap the ingest and
agents.start calls in try-except blocks to catch and handle exceptions
gracefully, returning meaningful error messages or handling failures
appropriately.
@BugBot run review these changes

🚨 BugBot failed to run. Please try again with …

bugbot run

🚨 BugBot failed to run. Please try again with …

bugbot run verbose=true

bugbot is starting with request id serverGenReqId_44972a0d-747b-449c-81c1-60f3c82acb67

🚨 BugBot failed to run. Please try again with …

bugbot run verbose=true

bugbot is starting with request id serverGenReqId_7b8ed83f-7886-4f71-bf04-5fe988c84a32

🚨 BugBot failed to run. Please try again with …

bugbot run verbose=true

bugbot is starting with request id serverGenReqId_d55b89bd-f71a-49dc-9c97-a2cc0791922d

🚨 BugBot failed to run. Remote branch not found for this Pull Request. It may have been merged or deleted (requestId: serverGenReqId_d55b89bd-f71a-49dc-9c97-a2cc0791922d).

@DhivyaBharathy-web There is a conflict
Codecov Report

All modified and coverable lines are covered by tests ✅

Additional details and impacted files

```
@@           Coverage Diff           @@
##             main     #602   +/-   ##
=======================================
  Coverage   16.43%   16.43%
=======================================
  Files          24       24
  Lines        2160     2160
  Branches      302      302
=======================================
  Hits          355      355
  Misses       1789     1789
  Partials       16       16
```

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.
User description
This notebook demonstrates a multi-agent workflow for predictive maintenance using AI techniques. It provides step-by-step code and explanations for building, evaluating, and visualizing predictive maintenance solutions.
PR Type
documentation, enhancement
Description
Add a new predictive maintenance multi-agent workflow notebook
Add a code analysis agent notebook for code quality assessment
Changes walkthrough 📝
- **Predictive_Maintenance_Multi_Agent_Workflow.ipynb** (`examples/cookbooks/Predictive_Maintenance_Multi_Agent_Workflow.ipynb`): Add predictive maintenance multi-agent workflow notebook with multiple agents (sensor monitoring, anomaly detection, etc.)
- **Code_Analysis_Agent.ipynb** (`examples/cookbooks/Code_Analysis_Agent.ipynb`): Add code analysis agent example notebook
Summary by CodeRabbit
New Features
Documentation