Add activity feed API #1081
Conversation
Adds missing callback execution to tool-based responses and JSON/Pydantic output paths in the achat() method to ensure consistent task metadata propagation across all execution flows. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
- Remove dependency on verbose=True for streaming display function - Allow stream=True to work with verbose=False for real-time output - Maintain backward compatibility with existing verbose=True usage - Fixes issue where Gemini/Google Vertex AI streaming was buffered Resolves #956 Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
…ation - Enhanced test file with comprehensive behavioral validation - Added proper module import handling and mock-based testing - Extracted duplicated logic into _handle_ollama_sequential_logic helper - Eliminated 18-line code duplication between sync and async methods - Improved maintainability while preserving backward compatibility Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
- Enhanced _chat_completion method signature to accept task metadata parameters - Updated all calls to _chat_completion to pass task context - Fixed NameError where task_name, task_description, task_id were undefined - Updated _apply_guardrail_with_retry to propagate task metadata - Ensures complete task metadata propagation in sync chat execution path Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
…2244 fix: resolve task_name undefined error in async execution
- Removed unused custom_display_fn function that was defined but never called - Cleanup after streaming fix to avoid confusion - No functional changes, streaming fix remains intact Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
…lay behavior - Added _final_display_shown = False reset in achat() method to match chat() method - Ensures proper display flag management for both sync and async agent interactions - Maintains consistency across all agent conversation entry points 🤖 Generated with [Claude Code](https://claude.ai/code) Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
…2252 fix: accumulate tool results across iterations in Ollama sequential execution
… handling - Fix custom LLM streaming (Gemini/Vertex AI) in llm.py by removing verbose condition - Improve console parameter consistency when streaming is enabled - Ensures streaming works for stream=True regardless of verbose setting - Maintains full backward compatibility Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
…ror during Python shutdown - Change atexit handler from flush() to shutdown() in telemetry/__init__.py - This ensures PostHog aiohttp sessions are properly closed before Python shutdown - Prevents 'ImportError: sys.meta_path is None, Python is likely shutting down' error - Maintains backward compatibility with existing functionality Fixes #963 Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
…2316 fix: enable real-time streaming regardless of verbose setting
- Add early stopping logic for Ollama after first successful tool execution - Enhance tool summary generation to create more natural, OpenAI-like responses - Apply fix to both sync and async execution paths - Prevent multiple tool calls when execution is complete - Maintains full backward compatibility Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
…2304 fix: ensure consistent Task/Response formatting across all LLM providers
…2334 fix: properly shutdown telemetry aiohttp sessions to prevent ImportError during Python shutdown
…ool execution - Only trigger early stopping when no new tool calls are present in current response - Prevents premature termination during sequential tool calls like get_stock_price -> multiply - Maintains backward compatibility while fixing issue #964 - Applied to both sync and async execution paths 🤖 Generated with [Claude Code](https://claude.ai/code) Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
Ensures TelemetryCollector.stop() calls shutdown() instead of just flush() for consistency with the main atexit handler and proper PostHog session cleanup. Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
…l execution Fixes critical issues identified in PR review: - Move early stopping logic from inside 'if tool_calls:' block to 'else:' block - Fix logic placement in both sync and async execution paths - Remove unreachable code in _generate_ollama_tool_summary() - Ensure early stopping triggers correctly when Ollama returns empty responses after tool execution This ensures the fix properly handles the Ollama edge case where it returns empty responses after successful tool execution, preventing infinite loops while allowing proper sequential tool calls. 🤖 Generated with [Claude Code](https://claude.ai/code) Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
…2340 fix: improve Ollama sequential tool execution to prevent redundant calls
fix: make TelemetryCollector.stop() consistent with shutdown behavior
…tion - Add provider-aware parameter filtering for image generation - Filter out unsupported 'style' parameter for vertex_ai models - Maintain backward compatibility for other providers (OpenAI, Gemini) - Add drop_params=True safety mechanism - Resolves litellm.UnsupportedParamsError for gemini-2.0-flash-preview-image-generation Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
…ization feat: Add comprehensive provider examples for all major LiteLLM providers with clean API structure
…y system - Add proper cleanup in Agent.start() and Agent.astart() methods using try-finally blocks - Implement _cleanup_telemetry() method to ensure telemetry system shutdown - Prevent hanging after agent execution by properly shutting down background threads - Add test files to verify the termination fix works correctly Fixes #987 🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com>
- Add streaming generator support to Agent.start() method - Implement _start_stream() method for streaming logic - Add _chat_stream() method to route streaming to appropriate handlers - Add _custom_llm_stream() for custom LLM streaming support - Add _openai_stream() for OpenAI client streaming support - Add chat_completion_with_tools_stream() method to OpenAI client - Maintain backward compatibility for existing code - Add proper error handling and chat history management Fixes #981 Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
- Fixed test_agent_with_knowledge_config and test_ollama_rag_integration - Tests now properly check for lazy loading behavior instead of expecting immediate knowledge availability - Verifies knowledge sources are stored and manually triggers processing before checking knowledge availability - Preserves performance optimization of lazy loading while ensuring correct test behavior - All RAG integration tests now pass (11/11) 🤖 Generated with [Claude Code](https://claude.ai/code) Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
…1-1148 fix: Fix RAG integration tests to respect lazy loading pattern
- Add xml_tool_format configuration parameter (auto/true/false) - Replace static Qwen detection with dynamic _supports_xml_tool_format() method - Add fallback XML detection for any model outputting <tool_call> tags - Maintain backward compatibility with existing Qwen auto-detection - Support manual XML format enabling for any model Fixes #1077 Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
- Fix type inconsistency in xml_tool_format handling (now supports True/'true'/'True') - Simplify _is_qwen_provider() by removing redundant logic - Add proper XML parsing with ElementTree as primary method - Improve regex fallback pattern for nested JSON in XML tags - Add graceful error handling for malformed XML - Import xml.etree.ElementTree at module level for better performance Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
…5-0109 feat: make XML tool call parsing dynamic and configurable
…crease default temperature to 1.0
… parameters and enforce temperature for gpt-5 models
…rfiles and project files
Walkthrough
Introduces a new Express HTTP server at graphite-demo/server.js. It starts on port 3000, defines an in-memory activityFeed with three items, and exposes a single GET /feed endpoint that returns the array as JSON. Logs a startup message. No additional routes, errors, or exports.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor Client
    participant App as Express App
    participant Route as GET /feed Handler
    Client->>App: HTTP GET /feed
    App->>Route: Route dispatch
    Note over Route: Read in-memory activityFeed
    Route-->>App: res.json(activityFeed)
    App-->>Client: 200 OK + JSON array
```
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
@claude review this pull request and do a detailed analysis, and fix it if the existing code doesn't have the solution implemented. Make sure it has backward compatibility and no existing features are removed. After making those changes, review the applied changes again. Use @web to search if you don't know any information, or to find the latest documentation or the latest version. Run the code if you think you need to run it to test it. Start with a minimal code change if any changes are required. MAINLY IT SHOULD NOT IMPACT THE CURRENT SPEED OF EXECUTION of existing features, not increasing the overhead.
Claude encountered an error — View job. I'll analyze this and get back to you.
PR Reviewer Guide 🔍
Here are some key observations to aid the review process:
PR Code Suggestions ✨
Explore these optional code suggestions:
Actionable comments posted: 1
🧹 Nitpick comments (4)
graphite-demo/server.js (4)
1-3: Use env-configurable port and hide Express fingerprint
- Binding to a fixed port can clash with other services or PaaS environments that inject PORT.
- Disabling the X-Powered-By header avoids leaking the Express stack in responses.
Apply this diff:
```diff
 const express = require('express');
 const app = express();
-const port = 3000;
+// Hide framework signature
+app.disable('x-powered-by');
+// Prefer env PORT with a dev fallback
+const port = process.env.PORT || 3000;
```
24-26: Add minimal pagination + no-store cache header (backward-compatible)
Keeps default behavior the same but allows clients to request a limited number of items and ensures clients don't cache dynamic feeds.

```diff
-app.get('/feed', (req, res) => {
-  res.json(activityFeed);
-});
+app.get('/feed', (req, res) => {
+  const limitParam = Number.parseInt(req.query.limit ?? '', 10);
+  const limit =
+    Number.isFinite(limitParam) && limitParam > 0
+      ? Math.min(limitParam, activityFeed.length)
+      : activityFeed.length;
+  const items = activityFeed.slice(0, limit);
+  res.set('Cache-Control', 'no-store');
+  res.json(items);
+});
```
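The limit-clamping rule in the suggested diff can be isolated as a small pure function (hypothetical helper name, not part of the PR) so it can be unit-tested without an HTTP round trip:

```javascript
// Resolve a requested page size: accept only positive integers,
// cap at the total number of items, and fall back to "everything"
// for missing or malformed input.
function resolveLimit(rawLimit, total) {
  const parsed = Number.parseInt(rawLimit ?? '', 10);
  return Number.isFinite(parsed) && parsed > 0 ? Math.min(parsed, total) : total;
}
```

Inside the route handler, `resolveLimit(req.query.limit, activityFeed.length)` would replace the inline expression.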
28-30: Guard server startup and export app for tests/reuse
Wrapping listen avoids side-effects on import and enables integration tests or embedding in a larger app.

```diff
-app.listen(port, () => {
-  console.log(`Server running on port ${port}`);
-});
+if (require.main === module) {
+  app.listen(port, () => {
+    console.log(`Server running on port ${port}`);
+  });
+}
+
+module.exports = app;
```
5-22: Optional: enrich feed items with metadata (type, createdAt) for clients
Adding a minimal schema (e.g., type and createdAt ISO timestamp) helps clients sort/filter and render appropriately without guessing. Not required for this PR, but consider for follow-ups if this is more than a demo.
I can provide a follow-up patch adding type/createdAt while preserving current fields and order.
Do you want me to draft that?
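As an illustration of that follow-up (hypothetical code, not part of the PR), the existing items could be mapped into an enriched shape without touching the original fields:

```javascript
// Sample items in the PR's current shape
const activityFeed = [
  { id: 1000, title: 'New Photo Uploaded', body: 'Alice uploaded a new photo to her album.' },
  { id: 2000, title: 'Comment on Post', body: "Bob commented on Charlie's post." }
];

// Naive type inference from the title; a real feed would store the type
// explicitly at creation time instead of guessing from text.
function inferType(title) {
  if (/photo/i.test(title)) return 'photo';
  if (/comment/i.test(title)) return 'comment';
  return 'status';
}

// Spread keeps id/title/body untouched, preserving field order and values
const enriched = activityFeed.map((item) => ({
  ...item,
  type: inferType(item.title),
  createdAt: new Date().toISOString()
}));
```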
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
graphite-demo/server.js (1 hunks)
```javascript
const express = require('express');
const app = express();
const port = 3000;

// Fake data for the activity feed
const activityFeed = [
  {
    id: 1000,
    title: 'New Photo Uploaded',
    body: 'Alice uploaded a new photo to her album.'
  },
  {
    id: 2000,
    title: 'Comment on Post',
    body: "Bob commented on Charlie's post."
  },
  {
    id: 13,
    title: 'Status Update',
    body: 'Charlie updated their status: "Excited about the new project!"'
  }
];

app.get('/feed', (req, res) => {
  res.json(activityFeed);
});

app.listen(port, () => {
  console.log(`Server running on port ${port}`);
});
```
💡 Verification agent
🧩 Analysis chain
Verify repo dependencies and potential port conflicts
Ensure express is declared and check for other listeners on port 3000 to avoid collisions when running the demo.
Run this script from the repo root:
🏁 Script executed:

```shell
#!/usr/bin/env bash
set -euo pipefail
echo "== Checking for package.json files and express dependency =="
fd -a -t f -g package.json | while read -r pkg; do
  echo "---- $pkg ----"
  if jq -e '.dependencies.express // .devDependencies.express' "$pkg" >/dev/null 2>&1; then
    echo "express declared:" "$(jq -r '.dependencies.express // .devDependencies.express' "$pkg")"
  else
    echo "express not declared in this package.json"
  fi
done
echo
echo "== Searching for other servers that call listen(...) or use port 3000 =="
# Look for listen() calls (avoid node_modules)
rg -n -C2 -g '!**/node_modules/**' -P '\blisten\s*\(\s*(process\.env\.PORT|\d+)' || true
# Explicit port 3000 occurrences (outside node_modules)
rg -n -C2 -g '!**/node_modules/**' -P '\b3000\b' || true
echo
echo "== If express is missing, install with =="
echo "npm install express"
```

Length of output: 58521
Ensure Express is declared and make the port configurable to avoid collisions
• Add an explicit package.json in graphite-demo/ (or update the existing one) with Express as a dependency, e.g.:

```json
{
  "name": "graphite-demo",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.18.2"
  }
}
```

• In graphite-demo/server.js, replace the hard-coded port with an environment override:

```diff
- const port = 3000;
+ const port = process.env.PORT || 3000;
```

• Note that other components (e.g. Langtrace agents in src/praisonai-agents) default to localhost:3000. If you run both services locally, pick different ports or override via PORT to prevent conflicts.
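A slightly more defensive variant of that override (an assumption on top of the review's suggestion, not something it requires) validates PORT before using it, so a malformed value falls back to the default instead of being passed to listen():

```javascript
// Resolve the listening port from an environment-like object:
// only a positive integer in the valid TCP port range is accepted.
function resolvePort(env) {
  const parsed = Number.parseInt(env.PORT ?? '', 10);
  return Number.isFinite(parsed) && parsed > 0 && parsed < 65536 ? parsed : 3000;
}
```

In server.js this would be used as `const port = resolvePort(process.env);`.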
🤖 Prompt for AI Agents
In graphite-demo/server.js lines 1 to 30, make the port configurable and ensure
express is declared properly: keep the existing require('express') but replace
the hard-coded port with a value read from process.env.PORT (e.g. const port =
process.env.PORT || 3000) so callers can override it to avoid collisions;
additionally add or update graphite-demo/package.json at the project root to
declare express as a dependency (name, version and "express": "^4.18.2") so npm
install will install Express.
PR Type
Enhancement
Description
- Add Express.js server with activity feed endpoint
- Create /feed API route returning JSON data
- Include sample activity data for testing
Diagram Walkthrough
File Walkthrough
server.js (graphite-demo/server.js)
- Create Express server with activity feed API
- Add /feed GET endpoint returning activity data