[critical] [feat] Stand-alone core agent micro-service in Python#13
Conversation
Cursor Bugbot has reviewed your changes and found 4 potential issues.
        "Maintain a good working relationship",
        "Show empathy for her stressful situation"
    ],
    "personaSlug": "sarah-jenkins-overwhelmed-project-manager",
Persona slug mismatch breaks Sarah Jenkins simulation
High Severity
The personaSlug in the "Saying No to Extra Work" simulation is "sarah-jenkins-overwhelmed-project-manager", but the actual persona slug in personas.json is "sarah-jenkins-overwhelmed-pm". This mismatch causes load_persona to raise a ValueError whenever anyone tries to start this simulation, completely breaking it.
Additional Locations (1)
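Mismatches like this are easy to catch with a small cross-reference check in CI. A sketch, assuming personas.json is a list of objects with a "slug" key and each simulation object carries a "personaSlug" key (the key names come from the review; the exact file shapes are assumptions):

```python
import json

def find_missing_persona_slugs(personas, simulations):
    """Return personaSlug values referenced by simulations that have no
    matching "slug" entry in the personas list."""
    known = {p["slug"] for p in personas}
    return [s["personaSlug"] for s in simulations
            if s["personaSlug"] not in known]

def check_content_files(personas_path, simulations_path):
    """Load both JSON files and fail loudly on any dangling reference."""
    with open(personas_path) as f:
        personas = json.load(f)
    with open(simulations_path) as f:
        simulations = json.load(f)
    missing = find_missing_persona_slugs(personas, simulations)
    if missing:
        raise ValueError(f"Unknown persona slugs referenced: {missing}")
```

Running such a check as a pytest would have flagged "sarah-jenkins-overwhelmed-project-manager" before the simulation ever reached load_persona.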
    )

    return {
        "should_send_proactive": starts,
startsConversation: "sometimes" lacks randomization, always truthy
Medium Severity
David Miller's startsConversation is "sometimes" (a non-empty string), but all code paths treat it as a plain truthy value. In create_initial_state, it always sets proactive_trigger="start". In check_proactive_trigger, should_send_proactive is set to the string "sometimes" instead of a boolean, which is always truthy. The intent behind "sometimes" (a randomized conversation opening) is never implemented.
Additional Locations (2)
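One way to honor the "sometimes" value is a single helper that coerces the field to a real boolean, flipping a coin only for "sometimes". This is a sketch, not the project's code; the helper name and the 50% probability are assumptions:

```python
import random

def should_start_conversation(starts_conversation) -> bool:
    """Coerce a persona's startsConversation field to a boolean.

    True/False pass through unchanged; the string "sometimes"
    randomizes the decision instead of always being truthy.
    """
    if starts_conversation == "sometimes":
        return random.random() < 0.5
    return bool(starts_conversation)
```

Both create_initial_state and check_proactive_trigger would then call this helper, so should_send_proactive is always a genuine boolean and "sometimes" personas open the conversation only some of the time.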
        input_summary=input_summary,
        output_summary=output_summary,
    ))
    return trace
Identical _add_trace helper duplicated across four files
Low Severity
The _add_trace function is copy-pasted identically in analysis.py, conversation.py, evaluation.py, and proactive.py. This duplication increases the maintenance burden: any fix to the tracing logic must be applied in four places. It belongs in a shared utility module (e.g., the parent nodes package or a common utils.py).
Additional Locations (2)
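A minimal shape for the shared helper, e.g. a tracing module under the nodes package. The module path, entry fields, and function signature are assumptions inferred from the snippet above, not the project's actual code:

```python
import time
from dataclasses import dataclass, field

@dataclass
class TraceEntry:
    """One node-execution record in a session's trace."""
    node: str
    input_summary: str
    output_summary: str
    timestamp: float = field(default_factory=time.time)

def add_trace(trace: list, node: str,
              input_summary: str, output_summary: str) -> list:
    """Append one node-execution record and return the trace.

    Importing this from a shared module lets analysis.py,
    conversation.py, evaluation.py, and proactive.py drop
    their private _add_trace copies.
    """
    trace.append(TraceEntry(node, input_summary, output_summary))
    return trace
```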
    # Create virtual environment and install dependencies
    RUN uv venv /app/.venv
    ENV PATH="/app/.venv/bin:$PATH"
    RUN uv pip install --no-cache .
Docker builder stage fails without source code
High Severity
The builder stage copies only pyproject.toml then runs uv pip install --no-cache ., which invokes the hatchling build backend. Hatchling requires the src/careersim_agent directory (specified via packages in pyproject.toml) to build the wheel, but source code isn't copied until the runtime stage. This causes the Docker build to fail with a hatchling error about missing package files. The intent was to pre-install dependencies for layer caching, but uv pip install . tries to build the whole project, not just its dependencies.
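The simplest fix is to give hatchling the sources it needs before installing, so the builder stage can actually build the wheel. A sketch of the builder stage; the base image, uv installation method, and src/ layout are assumptions:

```dockerfile
FROM python:3.12-slim AS builder
# uv assumed available via the official distribution image
COPY --from=ghcr.io/astral-sh/uv:latest /uv /usr/local/bin/uv
WORKDIR /app

# Copy everything hatchling needs to build the wheel: the project
# metadata AND the package sources referenced by `packages`.
COPY pyproject.toml ./
COPY src ./src

RUN uv venv /app/.venv
ENV PATH="/app/.venv/bin:$PATH"
RUN uv pip install --no-cache .
```

This trades away some layer caching (the install layer rebuilds whenever src/ changes); preserving the cache-friendly split would mean installing only the dependencies first, for example via a lockfile-based workflow, before building the project itself.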


Note
Medium Risk
Adds an entire new deployable service that calls external LLM APIs and ships large local-model dependencies; the main risks are operational (resource usage, model downloads, config correctness) rather than regressions in existing production code paths.
Overview
Introduces a new standalone Python core agent microservice (agent/) that runs a LangGraph-driven conversation agent behind a Gradio UI and exposes programmatic endpoints (/api_*) so the backend can drive sessions via gradio_client (start/send/trigger/get state/end).
Adds persona and simulation content in JSON, goal evaluation using local HuggingFace transformers (sentiment/emotion and zero-shot classification) with per-turn goal progress tracking and node execution tracing, plus proactive messaging (start/inactivity/followup bursts) to continue conversations.
Packages and deploys the service with pyproject.toml, environment-based configuration (.env.example), a Docker build/runtime setup, a simulation test harness, and basic pytest coverage.
Written by Cursor Bugbot for commit 3cc7d7a. This will update automatically on new commits.