Ralph/chore/update.from.main.dec.10#1499
Conversation
Release 0.37.1
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
…y browsers or firewalls (#1492)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
🦋 Changeset detected. Latest commit: a0210ed. The changes in this PR will be included in the next version bump.
Walkthrough

Introduces a comprehensive refactoring that replaces direct workflow orchestration with a new

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant CLI as CLI/MCP App
    participant Client as OAuth Client
    participant Backend as Backend API
    participant Crypto as Crypto Module
    participant Supabase as Supabase

    CLI->>Crypto: generateKeyPair()
    activate Crypto
    Crypto-->>CLI: {publicKey, privateKey}
    deactivate Crypto

    CLI->>Backend: POST /api/auth/cli/start<br/>(publicKey)
    activate Backend
    Backend-->>CLI: {flow_id, verification_url,<br/>expires_at, poll_interval}
    deactivate Backend

    CLI->>Client: Open browser to<br/>verification_url
    activate Client
    Client->>Backend: User authenticates
    Client-->>CLI: (awaiting poll)
    deactivate Client

    loop Poll for Completion
        CLI->>Backend: GET /api/auth/cli/status<br/>?flow_id=...
        activate Backend
        Backend-->>CLI: {status, encrypted_tokens}<br/>(when ready)
        deactivate Backend
    end

    CLI->>Crypto: decryptTokens(encrypted_tokens,<br/>privateKey)
    activate Crypto
    Crypto->>Crypto: RSA-OAEP decrypt AES key
    Crypto->>Crypto: AES-256-GCM decrypt tokens
    Crypto-->>CLI: {access_token, refresh_token,<br/>expires_in, user_id}
    deactivate Crypto

    CLI->>Supabase: setSession({...tokens})
    activate Supabase
    Supabase-->>CLI: Session established
    deactivate Supabase
```
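The decryption step shown in the diagram (RSA-OAEP to unwrap an ephemeral AES key, then AES-256-GCM to decrypt the token payload) can be sketched with Node's `crypto` module. This is a hypothetical round-trip, assuming the backend returns the wrapped key, IV, auth tag, and ciphertext as base64 fields; the real payload shape and field names in tm-core may differ.

```typescript
import {
	constants,
	createCipheriv,
	createDecipheriv,
	generateKeyPairSync,
	privateDecrypt,
	publicEncrypt,
	randomBytes
} from 'node:crypto';

// Assumed payload shape (illustrative, not the actual tm-core contract)
interface EncryptedTokens {
	key: string; // AES key, RSA-OAEP-encrypted with the CLI's public key (base64)
	iv: string; // AES-GCM initialization vector (base64)
	tag: string; // AES-GCM authentication tag (base64)
	data: string; // AES-256-GCM ciphertext of the token JSON (base64)
}

function decryptTokens(payload: EncryptedTokens, privateKeyPem: string) {
	// 1. RSA-OAEP: recover the ephemeral AES-256 key with the CLI's private key
	const aesKey = privateDecrypt(
		{
			key: privateKeyPem,
			padding: constants.RSA_PKCS1_OAEP_PADDING,
			oaepHash: 'sha256'
		},
		Buffer.from(payload.key, 'base64')
	);
	// 2. AES-256-GCM: decrypt and authenticate the token payload
	const decipher = createDecipheriv(
		'aes-256-gcm',
		aesKey,
		Buffer.from(payload.iv, 'base64')
	);
	decipher.setAuthTag(Buffer.from(payload.tag, 'base64'));
	const plaintext = Buffer.concat([
		decipher.update(Buffer.from(payload.data, 'base64')),
		decipher.final()
	]);
	return JSON.parse(plaintext.toString('utf8'));
}

// Round-trip demo: simulate the backend encrypting tokens for the CLI
const { publicKey, privateKey } = generateKeyPairSync('rsa', {
	modulusLength: 2048,
	publicKeyEncoding: { type: 'spki', format: 'pem' },
	privateKeyEncoding: { type: 'pkcs8', format: 'pem' }
});
const aesKey = randomBytes(32);
const iv = randomBytes(12);
const cipher = createCipheriv('aes-256-gcm', aesKey, iv);
const data = Buffer.concat([
	cipher.update(JSON.stringify({ access_token: 'at', user_id: 'u1' }), 'utf8'),
	cipher.final()
]);
const payload: EncryptedTokens = {
	key: publicEncrypt(
		{ key: publicKey, padding: constants.RSA_PKCS1_OAEP_PADDING, oaepHash: 'sha256' },
		aesKey
	).toString('base64'),
	iv: iv.toString('base64'),
	tag: cipher.getAuthTag().toString('base64'),
	data: data.toString('base64')
};
console.log(decryptTokens(payload, privateKey).access_token); // prints "at"
```

The hybrid scheme keeps the flow stateless for the backend: only the holder of the private key generated at the start of the flow can recover the tokens polled later.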
Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Areas requiring extra attention:
Possibly related PRs
Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (9)
packages/tm-core/src/modules/workflow/orchestrators/workflow-orchestrator.spec.ts (1)
502-525: Remove or update skipped test reflecting behavior change.

This test is marked as skipped due to a behavior change where the RED phase with 0 failures now auto-completes the subtask. Consider:
- Remove the test if the validation no longer applies
- Update the test to verify the new auto-complete behavior
- Add a new test that explicitly validates the auto-complete logic when RED phase has 0 failures
Keeping skipped tests with outdated behavior creates maintenance burden and confusion.
If the new behavior is intentional, replace this test with:

```diff
-// Note: When all tests pass in RED phase, the orchestrator auto-completes
-// the subtask (feature already implemented) instead of throwing.
-// This test is skipped as the behavior has changed.
-it.skip('should validate RED phase test results have failures', async () => {
+it('should auto-complete subtask when RED phase has zero failures', async () => {
 	await orchestrator.transition({ type: 'PREFLIGHT_COMPLETE' });
 	await orchestrator.transition({
 		type: 'BRANCH_CREATED',
 		branchName: 'feature/test'
 	});
-	// Provide passing test results (should fail RED phase validation)
-	await expect(
-		orchestrator.transition({
-			type: 'RED_PHASE_COMPLETE',
-			testResults: {
-				total: 5,
-				passed: 5,
-				failed: 0,
-				skipped: 0,
-				phase: 'RED'
-			}
-		})
-	).rejects.toThrow('RED phase must have at least one failing test');
+	// Provide passing test results (should auto-complete)
+	await orchestrator.transition({
+		type: 'RED_PHASE_COMPLETE',
+		testResults: {
+			total: 5,
+			passed: 5,
+			failed: 0,
+			skipped: 0,
+			phase: 'RED'
+		}
+	});
+
+	// Verify subtask was auto-completed
+	const context = orchestrator.getContext();
+	expect(context.subtasks[0].status).toBe('completed');
 });
```

packages/tm-core/src/modules/tasks/validation/task-id.ts (2)
139-155: Documentation inconsistency with validation pattern.

The examples on lines 147-148 show `"1.2.3"` as a valid input, but this format is now rejected by `TASK_ID_PATTERN` (line 16 explicitly states deep nesting is not supported). Update the documentation to reflect the current validation rules.

````diff
 * @example
 * ```typescript
- * extractParentId("1.2.3"); // "1"
 * extractParentId("1.2"); // "1"
 * extractParentId("1"); // "1"
+ * extractParentId("HAM-1"); // "HAM-1"
 * ```
````
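For reference, the validation rules described in these comments (numeric IDs with at most one subtask level, plus 3-letter API-prefixed IDs) can be expressed as a small sketch. This is illustrative only; the actual `TASK_ID_PATTERN` in task-id.ts may differ in detail.

```typescript
// Illustrative pattern, not the real TASK_ID_PATTERN:
// accepts "1", "1.2" (one subtask level), and API IDs like "HAM-1";
// rejects deep nesting such as "1.2.3".
const TASK_ID_PATTERN = /^(?:\d+(?:\.\d+)?|[A-Z]{3}-\d+)$/;

// Subtask IDs use exactly one dot; API IDs never do.
const isSubtaskId = (id: string): boolean => /^\d+\.\d+$/.test(id);

console.log(TASK_ID_PATTERN.test('1.2')); // true
console.log(TASK_ID_PATTERN.test('1.2.3')); // false (deep nesting rejected)
console.log(TASK_ID_PATTERN.test('HAM-1')); // true
console.log(isSubtaskId('HAM-1')); // false (API IDs don't use subtask notation)
```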
157-172: Documentation inconsistency with validation pattern.

Line 166 shows `"1.2.3"` as a valid example, but this format is now rejected by validation. Additionally, consider whether API-format IDs like `"HAM-1"` should return `false` here (they don't contain dots).

````diff
 * @example
 * ```typescript
 * isSubtaskId("1.2"); // true
- * isSubtaskId("1.2.3"); // true
 * isSubtaskId("1"); // false
+ * isSubtaskId("HAM-1"); // false (API IDs don't use subtask notation)
 * ```
````

packages/ai-sdk-provider-grok-cli/CHANGELOG.md (1)
3-17: Multiple duplicate "## null" placeholders in changelog, likely incomplete changeset processing.

The file contains 8 repeated "## null" headers with no substantive version information. This pattern suggests either:
- Auto-generated intermediate state from changeset tooling that wasn't finalized
- Broken version resolution during changelog generation
- Placeholder entries that should be removed before release
Either way, this is not a valid changelog state and will confuse version tracking and changelog parsing tools.
Suggested fix: Either
- Remove all "## null" entries and consolidate actual version bumps, or
- Replace "null" with proper semantic version numbers (e.g., "0.1.0", "0.2.0") if these represent actual releases
This same issue appears across multiple changelogs in this PR (apps/mcp/CHANGELOG.md, packages/build-config/CHANGELOG.md, packages/tm-core/CHANGELOG.md, etc.).
apps/mcp/CHANGELOG.md (1)
3-50: Critical: Invalid "null" versions and dependency references across changelog entries.

The changelog shows repeated "## null" headers (lines 3–50) with dependency entries listing "@tm/core@null" instead of actual version numbers. This pattern indicates:
- Broken changeset generation: The versioning system failed to resolve actual version numbers
- Dependency resolution will fail: "@tm/core@null" is not a valid semantic version reference
- Systemic issue: This same problem appears in packages/ai-sdk-provider-grok-cli/CHANGELOG.md, packages/build-config/CHANGELOG.md, and packages/tm-core/CHANGELOG.md
This needs immediate remediation before merge—either regenerate changelogs with proper versions or manually resolve the null placeholders to actual version numbers.
Expected fix: Replace all "## null" sections and "@tm/core@null" references with proper semantic version numbers. For example:
```markdown
## 0.3.0

### Patch Changes

- Updated dependencies [@tm/core@0.26.2]:
  - @tm/core@0.26.2
```

packages/claude-code-plugin/CHANGELOG.md (1)
11-13: Duplicate version entry in changelog.

Version "0.0.3" appears at both line 11 and line 13. This is likely a copy-paste error during changelog generation or a merge artifact. One entry should be removed to keep the changelog clean and accurate.
Unlike other changelogs in this PR, this file correctly uses semantic version numbers instead of "null" placeholders—that's good. Just need to clean up the duplicate.
packages/build-config/CHANGELOG.md (1)
3-20: Duplicate "## null" entries, part of systemic changelog versioning issue.

This file contains 10 "## null" placeholder sections (lines 3–20) before the actual 1.0.1 entry. This matches the problematic pattern in other changelogs (apps/mcp/CHANGELOG.md, packages/ai-sdk-provider-grok-cli/CHANGELOG.md, packages/tm-core/CHANGELOG.md).
As noted in the comment on apps/mcp/CHANGELOG.md, this indicates broken changeset version resolution across the PR. All "## null" sections need to be cleaned up and replaced with proper semantic version numbers before merge.
packages/tm-core/CHANGELOG.md (1)
3-20: Duplicate "## null" entries, part of systemic changelog versioning issue.

This file has 9 "## null" placeholder sections (lines 3–20) preceding the actual 0.26.1 entry. This matches the problematic pattern across multiple changelogs in this PR.
As noted in previous comments, these null versions indicate broken changeset version resolution and must be replaced with proper semantic version numbers before merge.
packages/tm-core/src/modules/workflow/services/workflow.service.ts (1)
565-586: Document branch naming format change and implement backward compatibility for resumed workflows.

PR #1491 added the `tm/` prefix to autopilot branch names (format: `tm/<namespace>/task-<id>-<title>`). Workflows created before this change store the old branch name format in their persisted state. When resuming, the code attempts to checkout the stored branch name directly without regenerating it, which works only if the old branch still exists on disk. If users delete their old branches or expect the new format, resume will fail. Consider either: (1) implementing migration logic to detect and regenerate branch names on resume, (2) adding a version check in the workflow state to handle format upgrades, or (3) documenting this breaking change prominently in upgrade guides with instructions for users with existing workflows.
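A hypothetical helper illustrating the new branch name format from PR #1491 (`tm/<namespace>/task-<id>-<title>`); the function name and slugging rules here are assumptions, not the actual tm-core implementation.

```typescript
// Illustrative only: generates a branch name in the tm/<namespace>/task-<id>-<title>
// format described above. The real slugging logic may differ.
function autopilotBranchName(
	namespace: string,
	taskId: string,
	title: string
): string {
	const slug = title
		.toLowerCase()
		.replace(/[^a-z0-9]+/g, '-') // collapse non-alphanumeric runs into hyphens
		.replace(/^-+|-+$/g, ''); // trim leading/trailing hyphens
	return `tm/${namespace}/task-${taskId}-${slug}`;
}

console.log(autopilotBranchName('default', '12', 'Add OAuth PKCE Flow'));
// tm/default/task-12-add-oauth-pkce-flow
```

A version field in the persisted workflow state would let resume logic detect whether the stored branch name predates this format and regenerate it accordingly.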
♻️ Duplicate comments (7)
packages/tm-core/src/modules/workflow/orchestrators/workflow-orchestrator.spec.ts (7)
436-461: Same async/await concern applies to guard condition tests.

Guard condition evaluation should typically be synchronous. Unless guards perform I/O operations (database checks, file system access, etc.), these tests should use synchronous assertion patterns.
This is the same concern as the previous comment about unnecessary async/await usage.
489-500: Verify async necessity for validation tests.

Test result validation should be synchronous. If validation logic doesn't perform I/O, this test should use synchronous assertions.
548-579: Excessive async/await usage in test setup and assertions.

This test uses `await` for setup transitions (lines 549-550, 555-564) that previously worked synchronously throughout most of the test suite. Unless these specific transitions perform I/O, they should remain synchronous.
637-659: Git validation guard should likely be synchronous.

Unless the git validation guard performs actual filesystem checks, this should be a synchronous test. Guards typically evaluate context state, which is synchronous.
1073-1088: Abort state checking should be synchronous.

Checking whether a workflow is aborted is a simple state check (boolean flag) that doesn't require async operations. This test should use synchronous assertions.
1401-1424: Second skipped test with behavior change - apply same fix.

This is a duplicate of the earlier skipped test pattern. The same recommendation applies: remove the test or update it to verify the new auto-complete behavior for RED phase with zero failures.
See the earlier comment on lines 502-525 for the suggested fix pattern.
1426-1459: Adapter validation test uses excessive async/await.

Test result validation through adapters should be synchronous unless the validator performs I/O. The extensive use of `await` throughout this test (lines 1427-1428, 1435-1444, 1447-1458) is likely unnecessary.
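The recurring point in these comments can be illustrated with a minimal sketch: a guard that only reads in-memory context is synchronous, so its test needs no async/await. The guard and context shape below are assumptions for illustration, not the actual orchestrator API.

```typescript
// Hypothetical guard: reads only in-memory context, so it is synchronous.
interface WorkflowContext {
	testResults: { failed: number };
}

const redPhaseHasFailures = (ctx: WorkflowContext): boolean =>
	ctx.testResults.failed > 0;

// Synchronous assertions, no await needed
console.log(redPhaseHasFailures({ testResults: { failed: 2 } })); // true
console.log(redPhaseHasFailures({ testResults: { failed: 0 } })); // false
```

Reserving `await` for transitions that genuinely perform I/O keeps the test suite's intent legible: an `async` test signals real asynchrony rather than habit.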
🧹 Nitpick comments (15)
.github/workflows/claude-docs-updater.yml (1)
1-136: Complete workflow architecture for docs automation is sound.

The three-workflow system (trigger → updater → optional PR creation) is well-designed: the trigger identifies changes, the updater uses Claude to intelligently update docs, and changes are proposed via PR for review. This maintains a safe workflow where Claude's documentation changes are always reviewed before merge.
Consider adding a notification step (e.g., Slack or Discord webhook) on PR creation failure to alert maintainers if automated documentation updates encounter issues.
tests/unit/mcp/tools/tool-registration.test.js (1)
137-140: Consider removing redundant environment variable assignments.

Throughout the test file, `process.env.TASK_MASTER_TOOLS` is set even though an explicit `toolMode` argument is passed to `registerTaskMasterTools`. Since the implementation uses the `toolMode` parameter directly (line 45 of index.js: `const enabledTools = toolMode.trim()`), these environment variable assignments have no effect and may confuse readers.

Consider simplifying tests by removing the env var assignments when explicit toolMode is passed:

```diff
-process.env.TASK_MASTER_TOOLS = 'all';
 registerTaskMasterTools(mockServer, 'all');
```

Alternatively, add a comment explaining that env vars are set for documentation purposes but toolMode parameter takes precedence.
Also applies to: 145-150, 155-160, 165-168, 173-176, 183-186, 192-198, 203-206, 219-222, 227-233, 238-241, 246-249
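The precedence described above can be sketched as follows. `resolveToolMode` is a hypothetical helper for illustration; the actual resolution in index.js may be structured differently.

```typescript
// Illustrative sketch: an explicit toolMode argument wins over the
// TASK_MASTER_TOOLS environment variable, which wins over a default.
function resolveToolMode(toolMode?: string): string {
	return (toolMode ?? process.env.TASK_MASTER_TOOLS ?? 'standard').trim();
}

process.env.TASK_MASTER_TOOLS = 'all';
console.log(resolveToolMode('core')); // "core" (explicit parameter takes precedence)
console.log(resolveToolMode()); // "all" (env var used as fallback)
```

Under this precedence, setting the env var in a test that also passes an explicit argument is indeed a no-op, which is the reviewer's point.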
src/utils/create-mcp-config.js (2)
45-47: MCP tier comments are clear and consistent with the new TASK_MASTER_TOOLS env

The inline summary of `core`/`standard`/`all` and their tool counts is helpful and matches the rest of the docs. Just keep these numbers in sync with the actual tool sets and the CLAUDE.md table if tiers evolve.
67-80: Consider moving JSON file I/O to shared readJSON/writeJSON utilities

This utility still uses raw `fs.readFileSync`/`fs.writeFileSync` for JSON config handling. Per the shared guidelines for utils modules, it would be cleaner and more consistent to delegate JSON reads/writes (and their error handling) to the central `readJSON`/`writeJSON` helpers so behavior is uniform across the codebase and file errors are handled in one place.

As per coding guidelines, JSON file operations in utils should go through the shared utilities rather than direct `fs` calls.

Also applies to: 111-137
.taskmaster/CLAUDE.md (1)
110-127: MCP tool tiers section clearly explains default/core behavior and upgrade path

The tier table, explicit core/standard/all tool lists, and the one-line upgrade instruction (edit `TASK_MASTER_TOOLS` and restart the MCP connection) make the new tiered loading model very understandable. As the tool catalog grows, just ensure these lists and counts stay aligned with the actual MCP tooling definitions.

Based on learnings, this nicely complements the MCP configuration guidance elsewhere in the repo.
packages/tm-bridge/CHANGELOG.md (1)
30-36: Avoid adding another `## null` section; use a real version or consolidate entries

This change introduces yet another `## null` heading and identical patch block, increasing the number of duplicate "null" sections and triggering MD024. It would be better to either:

- Replace `null` with the actual version once known, and/or
- Consolidate these dependency notes under a single versioned section rather than duplicating identical `null` headings.

apps/cli/CHANGELOG.md (1)
21-27: Clean up duplicate `## null` sections and dependency bullets

These additions introduce one more "Updated dependencies" bullet and yet another `## null`/`### Patch Changes` block, increasing the number of duplicate `null` sections (and triggering MD024). Consider:

- Using a concrete version instead of `null` once it's known, and
- Either merging these dependency notes into a single section or removing the extra `## null` block to avoid redundant headings.

CHANGELOG.md (1)
3-11: 0.37.2-rc.0 entry looks structurally correct; consider minor copy tweak

The new section matches the project's changelog pattern and clearly describes the fixes. One nit: Line 11 reads a bit awkwardly. You could tighten the wording like this:

```diff
-- Solves issue some users were having regarding not being able to start workflow on master Taskmaster tag
+- Resolves an issue where some users couldn't start workflows on the master Taskmaster tag
```

packages/tm-core/tests/integration/workflow/autopilot-workflow.test.ts (1)
41-42: Consider edge case for undefined HOME.

If `HOME` is undefined in the environment (rare but possible in some CI systems), restoring it to `undefined` in `afterEach` works correctly, but you may want to use `delete process.env.HOME` for cleaner semantics when restoring undefined.

```diff
-const originalHome = process.env.HOME;
+const originalHome = process.env.HOME ?? undefined;
```

This is a minor edge case and the current implementation should work fine.
apps/cli/src/commands/autopilot/complete.command.ts (1)
127-143: Fallback phase values may mask unexpected states.

Lines 132 and 138 use fallback values (`'GREEN'` and `'COMMIT'`) if `newStatus.tddPhase` is undefined. While this provides a safe default, it could mask cases where the workflow enters an unexpected state. Consider logging when the fallback is used for debugging purposes.

```diff
 if (tddPhase === 'RED') {
+	const nextPhase = newStatus.tddPhase || 'GREEN';
+	if (!newStatus.tddPhase) {
+		console.debug('No tddPhase in newStatus, defaulting to GREEN');
+	}
 	formatter.success('RED phase completed', {
-		nextPhase: newStatus.tddPhase || 'GREEN',
+		nextPhase,
 		testResults,
 		subtask: currentSubtask?.title
 	});
 }
```

apps/mcp/src/tools/autopilot/commit.tool.ts (1)
146-149: Comment is misleading; ternary is always true.

The comment says "RED phase = test files, GREEN phase = implementation," but at this point in the code, we've already verified `status.tddPhase === 'COMMIT'` (line 68). The ternary `status.tddPhase === 'COMMIT' ? 'feat' : 'test'` will always evaluate to `'feat'`.

Consider simplifying or clarifying:

```diff
-	// Determine commit type based on phase and subtask
-	// RED phase = test files, GREEN phase = implementation
-	const type = status.tddPhase === 'COMMIT' ? 'feat' : 'test';
+	// COMMIT phase always uses 'feat' type for completed subtask work
+	const type = 'feat';
```

apps/cli/src/commands/autopilot/abort.command.ts (1)
65-67: Consider if resume is necessary just to get status before abort.

The current flow resumes the workflow (Line 66) just to call `getStatus()` (Line 67) before aborting. If the workflow is being aborted anyway, this resume step may be unnecessary overhead. Consider whether `tmCore.workflow` could expose a method to get status from persisted state without fully resuming, or if the abort operation could return the pre-abort status directly.

apps/cli/src/commands/autopilot/start.command.ts (1)
106-111: Avoid `any` type for subtask mapping.

The explicit `any` type on the subtask parameter reduces type safety. Consider typing the subtask parameter based on the task's subtask structure.

```diff
-	subtasks: task.subtasks.map((st: any) => ({
+	subtasks: task.subtasks.map((st: { id: string; title: string; status: string }) => ({
 		id: st.id,
 		title: st.title,
 		status: st.status,
 		maxAttempts
 	})),
```

packages/tm-core/src/modules/auth/services/oauth-service.ts (2)
64-64: Consider clearing keypair after successful authentication.

The RSA keypair is stored as a class field and persists until the instance is garbage collected. For defense in depth, consider clearing the private key from memory after the tokens have been successfully decrypted.

```diff
+	/**
+	 * Clear sensitive keypair data from memory
+	 */
+	private clearKeyPair(): void {
+		this.keyPair = null;
+	}
```

Then call `this.clearKeyPair()` after successful decryption in `pollForCompletion` or after the flow completes in `authenticateWithBackendPKCE`.
406-413: Silent retry on network errors could mask persistent issues.

Network errors during polling are logged at debug level and polling continues. While this provides resilience against transient failures, persistent network issues will only manifest as an eventual timeout. Consider adding a counter for consecutive network failures and surfacing a more specific error if the threshold is exceeded.
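The suggested counter can be sketched as follows. This is a minimal illustration, assuming a `pollStatus` callback; the actual oauth-service polling loop has a different shape.

```typescript
// Sketch: polling with a consecutive-failure threshold, so persistent network
// problems surface as a specific error instead of a generic timeout.
// pollStatus and flowId are illustrative, not the real oauth-service API.
async function pollForCompletion(
	pollStatus: (flowId: string) => Promise<{ status: string; encrypted_tokens?: string }>,
	flowId: string,
	{ intervalMs = 2000, maxNetworkFailures = 5, timeoutMs = 120_000 } = {}
) {
	const deadline = Date.now() + timeoutMs;
	let consecutiveFailures = 0;
	while (Date.now() < deadline) {
		try {
			const result = await pollStatus(flowId);
			consecutiveFailures = 0; // reset on any successful response
			if (result.status === 'completed') return result;
		} catch (err) {
			consecutiveFailures++;
			if (consecutiveFailures >= maxNetworkFailures) {
				throw new Error(
					`Polling failed ${consecutiveFailures} times in a row: ${err}`
				);
			}
		}
		await new Promise((resolve) => setTimeout(resolve, intervalMs));
	}
	throw new Error('Authentication flow timed out');
}

// Demo: two transient failures, then success on the third attempt
let attempts = 0;
pollForCompletion(
	async () => {
		attempts++;
		if (attempts < 3) throw new Error('ECONNRESET');
		return { status: 'completed' };
	},
	'flow-123',
	{ intervalMs: 10 }
).then((r) => console.log(r.status)); // prints "completed"
```

Resetting the counter on every successful response keeps isolated blips from accumulating toward the threshold, while a genuinely dead network fails fast with a descriptive error.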
```typescript
const projectRoot = mergedOptions.projectRoot!;

// Initialize TmCore facade
const tmCore = await createTmCore({ projectPath: projectRoot });
```
Non-null assertion may cause runtime error if projectRoot is undefined.
`getProjectRoot()` can return undefined, but the non-null assertion (`!`) bypasses TypeScript's null check. If `mergedOptions.projectRoot` is undefined, `createTmCore` will receive an invalid path.
Consider adding validation before the assertion:
```diff
-const projectRoot = mergedOptions.projectRoot!;
+const projectRoot = mergedOptions.projectRoot;
+if (!projectRoot) {
+	formatter.error('Project root could not be determined', {
+		suggestion: 'Specify --project-root or run from a TaskMaster project directory'
+	});
+	process.exit(1);
+}
```

🤖 Prompt for AI Agents
In apps/cli/src/commands/autopilot/resume.command.ts around lines 40 to 43, the
code uses a non-null assertion on mergedOptions.projectRoot which can be
undefined; validate mergedOptions.projectRoot before passing it to createTmCore:
if it's undefined, either compute a safe fallback (e.g., call getProjectRoot()
again or use process.cwd()) or throw a clear error with guidance, and then pass
the validated string into createTmCore without using the `!` operator so
TypeScript and runtime are safe.
```json
{
	"url": "https://context7.com/eyaltoledano/claude-task-master",
	"public_key": "pk_52Na55p8REi9c5jSFszav"
}
```
Missing documentation and unclear purpose.
The file lacks critical context:
- No explanation of what `context7.json` is for or where it fits into the codebase.
- Relationship to the PR's stated scope (refactoring to `createTmCore`, OAuth PKCE, task ID validation, etc.) is unclear.
- No indication of where this file is located in the directory structure.
- No usage or integration notes.
Please provide:
- A clear explanation of the file's purpose and integration point.
- Confirmation that this file location and content align with the PR objectives.
🤖 Prompt for AI Agents
In context7.json around lines 1 to 4, the file lacks documentation and clarity
about purpose and integration: add a short comment or accompanying README
section (in the same directory) stating this file is a public client
configuration for the Context7 integration (contains public URL and public_key
used by the frontend or third-party service), specify its repository path (e.g.,
/config/context7.json or /integrations/context7/context7.json), document how and
where it is loaded (which module or bootstrap step consumes it), describe
security considerations (public vs secret keys) and intended usage (e.g., used
by OAuth PKCE flow or task reporting), and confirm in the PR description or a
header comment that the file and its contents align with the PR objectives
(createTmCore refactor, OAuth PKCE, task ID validation) by explaining which of
those features consume this file.
🧩 Analysis chain
🏁 Script executed:
```shell
# First, let's find PR-related files or metadata in the repository
git ls-files | head -20
```

Repository: eyaltoledano/claude-task-master
Length of output: 609
🏁 Script executed:
```shell
# Look for PR or pull request related files
find . -type f -name "*pull*" -o -name "*pr*" -o -name "*PULL*" 2>/dev/null | head -20
```

Repository: eyaltoledano/claude-task-master
Length of output: 1574
🏁 Script executed:
```shell
# Check recent commits to understand the PR context
git log --oneline -10 2>/dev/null || echo "Not a git repository or git not available"
```

Repository: eyaltoledano/claude-task-master
Length of output: 167
🏁 Script executed:
```shell
# Check if context7.json exists and its contents
if [ -f "context7.json" ]; then
	echo "=== context7.json found ==="
	cat context7.json
else
	echo "=== context7.json not found in root ==="
fi

# Look for any PR description files or templates
find . -type f \( -name "*.md" -o -name "*.txt" \) 2>/dev/null | grep -i "pull\|pr\|description" | head -10

# Check for .github directory with PR templates
ls -la .github/ 2>/dev/null || echo "No .github directory"
```

Repository: eyaltoledano/claude-task-master
Length of output: 1099
🏁 Script executed:
```shell
# Check the referenced rule files from learnings
if [ -f ".cursor/rules/git_workflow.mdc" ]; then
	echo "=== git_workflow.mdc found ==="
	cat ".cursor/rules/git_workflow.mdc"
fi
echo ""
if [ -f ".cursor/rules/dev_workflow.mdc" ]; then
	echo "=== dev_workflow.mdc found ==="
	cat ".cursor/rules/dev_workflow.mdc"
fi

# List what's in .cursor/rules
echo ""
echo "=== Files in .cursor/rules ==="
ls -la .cursor/rules/ 2>/dev/null || echo "Directory not found"
```

Repository: eyaltoledano/claude-task-master
Length of output: 44936
🏁 Script executed:
```shell
# Look for GitHub workflow files that might indicate PR requirements
find . -type f -name "*.yml" -o -name "*.yaml" 2>/dev/null | head -10

# Check for any CONTRIBUTING or similar documentation
find . -type f -name "CONTRIBUTING*" -o -name "contributing*" 2>/dev/null

# Look for any PR or merge request templates
find . -type f \( -name "*TEMPLATE*" -o -name "*template*" \) 2>/dev/null | head -10
```

Repository: eyaltoledano/claude-task-master
Length of output: 755
🏁 Script executed:
```shell
# Search for context7.json in the entire repository
find . -name "context7.json" 2>/dev/null

# Check git status if this is a git repo
git status 2>/dev/null | head -30 || echo "Not a git repository"

# Check if there's a git diff
git diff --stat 2>/dev/null | head -20 || echo "No git diff available"
```

Repository: eyaltoledano/claude-task-master
Length of output: 314
🏁 Script executed:
```shell
# List all files in the repository root
ls -la | head -30

# Check if there's a .git directory
[ -d ".git" ] && echo "Git repository found" || echo "Not a git repository"

# Look for any JSON files that might be PR descriptions
find . -maxdepth 3 -name "*.json" 2>/dev/null | head -20
```

Repository: eyaltoledano/claude-task-master
Length of output: 2740
🏁 Script executed:
```shell
# Since context7.json is mentioned, let's search for it more broadly
rg "context7" --type json 2>/dev/null || echo "No matches found"

# Check the current directory structure
pwd
echo "---"
find . -type f -name "*.json" 2>/dev/null | head -20
```

Repository: eyaltoledano/claude-task-master
Length of output: 972
🏁 Script executed:
```shell
# Search for references to the PR title mentioned in the review
rg "Ralph.*chore.*update.*main.*dec" 2>/dev/null || echo "PR title not found in repository"

# Look for any CI/CD configuration that might enforce PR requirements
find . -type f \( -name ".github*" -o -name ".gitlab*" -o -name ".gitea*" \) 2>/dev/null

# Check for any Python or Node script files that might be related
ls -la *.py *.js 2>/dev/null || echo "No top-level scripts found"
```

Repository: eyaltoledano/claude-task-master
Length of output: 675
🌐 Web query:
eyaltoledano claude-task-master PR guidelines git workflow
💡 Result:
Summary of PR & Git workflow for eyaltoledano/claude-task-master
Key rules (short):
- Fork the repo, clone, install deps locally. [1]
- Always base feature branches off the next branch (not main). Use descriptive names like feature/your-feature-name. [1]
- Run tests, format checks and create a changeset before opening a PR: npm test, npm run format-check (or npm run format), npm run changeset. [1][2]
- Target your PR at the next branch. Include a clear description and reference related issues. [1]
- Follow project dev guidelines (tests, changesets, PRD if implementing larger features). [1][2]
- Use gh / git tooling for branch/PR automation where helpful; custom git logic (branch validation, PR description generation) can be layered on top of native git + gh. [5]
Helpful commands (examples):
- git checkout next && git pull origin next && git checkout -b feature/your-feature-name. [1]
- npm test; npm run format-check; npm run changeset. [1]
- gh pr create --base next --title "..." --body "..." (or use GitHub web UI). [5]
References
- Contributing / Quick start & PR guidance (Task Master docs). [1]
- Project README / usage and PRD guidance (eyaltoledano/claude-task-master). [2]
- Repository release notes & workflow details (GitHub releases). [3]
- DeepWiki docs summary for PRD-driven development and templates. [4]
- Discussion / notes about using gh and automating git workflows. [5]
If you want, I can generate a ready-to-use PR template, branch-name linter rule, or a CI checklist file for this repo.
🏁 Script executed:
```shell
# Look for documentation files
find . -type f \( -name "README*" -o -name "readme*" \) 2>/dev/null | head -5

# If found, show relevant sections
if [ -f "README.md" ]; then
	echo "=== README.md found ==="
	head -100 README.md
fi
```

Repository: eyaltoledano/claude-task-master
Length of output: 6035
🏁 Script executed:
```shell
# Get basic repository information
echo "=== Current directory ==="
pwd
echo ""
echo "=== Root directory contents ==="
ls -1
echo ""
echo "=== Check for context.json files ==="
find . -name "context*.json" -type f 2>/dev/null
echo ""
echo "=== Check .cursor directory ==="
[ -d ".cursor" ] && ls -la .cursor/ || echo ".cursor directory not found"
```

Repository: eyaltoledano/claude-task-master
Length of output: 1114
🏁 Script executed:
```shell
# Directly read the rule files mentioned in learnings
echo "=== Checking .cursor/rules/git_workflow.mdc ==="
cat .cursor/rules/git_workflow.mdc 2>/dev/null || echo "File not found"
echo ""
echo "=== Checking .cursor/rules/dev_workflow.mdc ==="
cat .cursor/rules/dev_workflow.mdc 2>/dev/null || echo "File not found"
echo ""
echo "=== List all cursor rules ==="
ls -la .cursor/rules/ 2>/dev/null || echo ".cursor/rules directory not found"
```

Repository: eyaltoledano/claude-task-master
Length of output: 44969
PR description and title do not meet project guidelines.
Per the project's documented git workflow, PR titles must follow the format `Task <ID>: <Task Title>` (e.g., "Task 4: Setup Express.js Server Project"). The current title `Ralph/chore/update.from.main.dec.10` does not follow this format.
PR descriptions must include:
- Task Overview
- Subtasks Completed (checklist)
- Implementation Details
- Testing approach
- Breaking Changes (if any)
- Related Tasks
Please update:
- PR title to follow the required format: `Task <ID>: <Task Title>`
- PR description with all required sections, including why `context7.json` is necessary and its purpose in the deployment context
🤖 Prompt for AI Agents
In context7.json around lines 1 to 4, the PR title and description do not follow
the project's git workflow guidelines; update the PR so the title matches the
format "Task <ID>: <Task Title>" (pick the correct task ID and concise title)
and replace the body with the required sections: Task Overview, Subtasks
Completed (checklist), Implementation Details, Testing approach, Breaking
Changes (if any), and Related Tasks; include a clear explanation of why
context7.json is required and what its "url" and "public_key" fields are used
for in the deployment context.
```typescript
/**
 * @fileoverview Unit tests for task ID schemas
 */

import { describe, expect, it } from 'vitest';
import {
	MainTaskIdSchema,
	TaskIdSchema,
	normalizeDisplayId
} from './task-id.schema.js';

describe('normalizeDisplayId', () => {
	describe('file storage IDs (numeric)', () => {
		it('should return numeric main task IDs unchanged', () => {
			expect(normalizeDisplayId('1')).toBe('1');
			expect(normalizeDisplayId('123')).toBe('123');
		});

		it('should return numeric subtask IDs unchanged', () => {
			expect(normalizeDisplayId('1.1')).toBe('1.1');
			expect(normalizeDisplayId('123.45')).toBe('123.45');
		});

		it('should trim whitespace', () => {
			expect(normalizeDisplayId(' 1 ')).toBe('1');
			expect(normalizeDisplayId(' 1.2 ')).toBe('1.2');
		});
	});

	describe('API storage IDs (prefixed)', () => {
		it('should normalize lowercase without hyphen', () => {
			expect(normalizeDisplayId('ham1')).toBe('HAM-1');
			expect(normalizeDisplayId('ham123')).toBe('HAM-123');
		});

		it('should normalize uppercase without hyphen', () => {
			expect(normalizeDisplayId('HAM1')).toBe('HAM-1');
			expect(normalizeDisplayId('HAM123')).toBe('HAM-123');
		});

		it('should normalize lowercase with hyphen', () => {
			expect(normalizeDisplayId('ham-1')).toBe('HAM-1');
			expect(normalizeDisplayId('ham-123')).toBe('HAM-123');
		});

		it('should keep uppercase with hyphen unchanged', () => {
			expect(normalizeDisplayId('HAM-1')).toBe('HAM-1');
			expect(normalizeDisplayId('HAM-123')).toBe('HAM-123');
		});

		it('should normalize mixed case', () => {
			expect(normalizeDisplayId('Ham-1')).toBe('HAM-1');
			expect(normalizeDisplayId('hAm1')).toBe('HAM-1');
		});

		it('should trim whitespace', () => {
			expect(normalizeDisplayId(' ham1 ')).toBe('HAM-1');
			expect(normalizeDisplayId(' HAM-1 ')).toBe('HAM-1');
		});
	});

	describe('edge cases', () => {
		it('should return empty string for empty input', () => {
			expect(normalizeDisplayId('')).toBe('');
		});

		it('should return null/undefined as-is', () => {
			expect(normalizeDisplayId(null as any)).toBe(null);
			expect(normalizeDisplayId(undefined as any)).toBe(undefined);
		});

		it('should return unmatched patterns as-is', () => {
			expect(normalizeDisplayId('abc')).toBe('abc');
			expect(normalizeDisplayId('HAMSTER-1')).toBe('HAMSTER-1'); // 7 letters, not 3
			expect(normalizeDisplayId('AB-1')).toBe('AB-1'); // 2 letters, not 3
		});
	});
});

describe('TaskIdSchema', () => {
	describe('file storage IDs', () => {
		it('should accept numeric main task IDs', () => {
			expect(TaskIdSchema.safeParse('1').success).toBe(true);
			expect(TaskIdSchema.safeParse('123').success).toBe(true);
		});

		it('should accept numeric subtask IDs (one level)', () => {
			expect(TaskIdSchema.safeParse('1.1').success).toBe(true);
			expect(TaskIdSchema.safeParse('123.45').success).toBe(true);
		});

		it('should reject deeply nested IDs', () => {
			expect(TaskIdSchema.safeParse('1.2.3').success).toBe(false);
			expect(TaskIdSchema.safeParse('1.2.3.4').success).toBe(false);
		});

		it('should return normalized value', () => {
			const result = TaskIdSchema.safeParse(' 1 ');
			expect(result.success).toBe(true);
			if (result.success) {
				expect(result.data).toBe('1');
			}
		});
	});

	describe('API storage IDs', () => {
		it('should accept prefixed IDs with hyphen', () => {
			expect(TaskIdSchema.safeParse('HAM-1').success).toBe(true);
			expect(TaskIdSchema.safeParse('ham-1').success).toBe(true);
		});

		it('should accept prefixed IDs without hyphen', () => {
			expect(TaskIdSchema.safeParse('HAM1').success).toBe(true);
			expect(TaskIdSchema.safeParse('ham1').success).toBe(true);
		});

		it('should reject prefixed subtask IDs', () => {
			expect(TaskIdSchema.safeParse('HAM-1.2').success).toBe(false);
			expect(TaskIdSchema.safeParse('ham1.2').success).toBe(false);
		});

		it('should normalize to uppercase with hyphen', () => {
			const result = TaskIdSchema.safeParse('ham1');
			expect(result.success).toBe(true);
			if (result.success) {
				expect(result.data).toBe('HAM-1');
			}
		});
	});

	describe('invalid inputs', () => {
		it('should reject empty string', () => {
			expect(TaskIdSchema.safeParse('').success).toBe(false);
		});

		it('should reject whitespace only', () => {
			expect(TaskIdSchema.safeParse(' ').success).toBe(false);
		});

		it('should reject invalid formats', () => {
			expect(TaskIdSchema.safeParse('abc').success).toBe(false);
			expect(TaskIdSchema.safeParse('HAMSTER-1').success).toBe(false);
			expect(TaskIdSchema.safeParse('AB-1').success).toBe(false);
		});
	});
});

describe('MainTaskIdSchema', () => {
	describe('valid main tasks', () => {
		it('should accept numeric main task IDs', () => {
			expect(MainTaskIdSchema.safeParse('1').success).toBe(true);
			expect(MainTaskIdSchema.safeParse('123').success).toBe(true);
		});

		it('should accept prefixed main task IDs', () => {
			expect(MainTaskIdSchema.safeParse('HAM-1').success).toBe(true);
			expect(MainTaskIdSchema.safeParse('ham-1').success).toBe(true);
			expect(MainTaskIdSchema.safeParse('ham1').success).toBe(true);
		});

		it('should normalize prefixed IDs', () => {
			const result = MainTaskIdSchema.safeParse('ham1');
			expect(result.success).toBe(true);
			if (result.success) {
				expect(result.data).toBe('HAM-1');
			}
		});
	});

	describe('invalid subtasks', () => {
		it('should reject numeric subtask IDs', () => {
			const result = MainTaskIdSchema.safeParse('1.2');
			expect(result.success).toBe(false);
			if (!result.success) {
				expect(result.error.issues[0].message).toContain('Subtask');
			}
		});

		it('should reject prefixed subtask IDs', () => {
			expect(MainTaskIdSchema.safeParse('HAM-1.2').success).toBe(false);
		});
	});

	describe('error messages', () => {
		it('should provide helpful error for invalid format', () => {
			const result = MainTaskIdSchema.safeParse('invalid');
			expect(result.success).toBe(false);
			if (!result.success) {
				expect(result.error.issues[0].message).toContain('Invalid task ID');
			}
		});

		it('should provide helpful error for subtask', () => {
			const result = MainTaskIdSchema.safeParse('1.2');
			expect(result.success).toBe(false);
			if (!result.success) {
				expect(result.error.issues[0].message).toContain('Subtask');
			}
		});
	});
});
```
🧩 Analysis chain
🏁 Script executed:
fd -t f "task-id.schema.ts" --not -path "*.spec.ts"
Repository: eyaltoledano/claude-task-master
Length of output: 322
🏁 Script executed:
fd "task-id.schema" packages/tm-core/src --type f
Repository: eyaltoledano/claude-task-master
Length of output: 187
🏁 Script executed:
cat -n packages/tm-core/src/common/schemas/task-id.schema.ts
Repository: eyaltoledano/claude-task-master
Length of output: 7958
🏁 Script executed:
grep -n "SchemaForMcp\|MCP" packages/tm-core/src/common/schemas/task-id.schema.spec.ts
Repository: eyaltoledano/claude-task-master
Length of output: 57
🏁 Script executed:
rg "TaskIdSchemaForMcp|MainTaskIdSchemaForMpc" packages/tm-core/src --type ts
Repository: eyaltoledano/claude-task-master
Length of output: 682
🏁 Script executed:
grep -A 5 "Excellent test suite" packages/tm-core/src/common/schemas/task-id.schema.spec.ts || echo "Text not in test file, checking comment structure"
Repository: eyaltoledano/claude-task-master
Length of output: 124
🏁 Script executed:
wc -l packages/tm-core/src/common/schemas/task-id.schema.spec.ts
Repository: eyaltoledano/claude-task-master
Length of output: 137
Add tests for MCP schema variants to achieve complete coverage.
The test file covers normalizeDisplayId, TaskIdSchema, and MainTaskIdSchema well, but is missing tests for the exported TaskIdSchemaForMpc and MainTaskIdSchemaForMpc variants. These are documented public APIs intended for MCP tool parameter schemas and should have test coverage to meet the 80%+ coverage requirement.
Add test cases verifying that the MCP variants pass validation without applying transforms, ensuring they match the behavior of their non-MCP counterparts for validation logic while skipping normalization.
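A minimal sketch of the behavior such tests would pin down. The variant name and its no-transform semantics are assumptions taken from this review comment; the real variants are Zod schemas, modeled here with a plain regex for illustration:

```typescript
// Hypothetical stand-in for an MCP schema variant: same validation grammar as
// TaskIdSchema, but no normalizing transform, so valid input is returned
// verbatim ("ham1" stays "ham1" instead of becoming "HAM-1").
const FILE_ID_PATTERN = /^\d+(\.\d+)?$/; // numeric, at most one subtask level
const API_MAIN_PATTERN = /^[A-Za-z]{3}-?\d+$/; // 3-letter prefix, optional hyphen

function parseForMcp(id: string): { success: boolean; data?: string } {
	const trimmed = id.trim();
	if (FILE_ID_PATTERN.test(trimmed) || API_MAIN_PATTERN.test(trimmed)) {
		return { success: true, data: id }; // no normalization applied
	}
	return { success: false };
}
```

Tests for the real variants would mirror the existing `safeParse` suites, additionally asserting `result.data === input` for valid IDs to prove the transform is skipped.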
```typescript
function isValidTaskIdFormat(id: string): boolean {
	if (!id) return false;
	const trimmed = id.trim();

	// File storage: numeric main or subtask (one level only)
	if (FILE_MAIN_PATTERN.test(trimmed) || FILE_SUBTASK_PATTERN.test(trimmed)) {
		return true;
	}

	// API storage: prefixed main task only (no subtasks in API storage)
	if (API_MAIN_PATTERN.test(trimmed)) {
		return true;
	}

	return false;
}
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
echo "=== Searching for isValidTaskIdFormat in schema module ==="
find . -path "./packages/tm-core/src/common/schemas/task-id.schema.ts" -type f | head -1 | xargs -I {} sh -c 'echo "File: {}"; wc -l {}'
echo ""
echo "=== Searching for isValidTaskIdFormat function definitions ==="
rg -n "function isValidTaskIdFormat|export.*isValidTaskIdFormat" --type=ts -A 2
echo ""
echo "=== Searching for pattern constants ==="
rg -n "FILE_MAIN_PATTERN|FILE_SUBTASK_PATTERN|API_MAIN_PATTERN" --type=ts
echo ""
echo "=== Check if validation module exists and contents ==="
find . -path "*/modules/tasks/validation/task-id.ts" -type f 2>/dev/null
Repository: eyaltoledano/claude-task-master
Length of output: 1750
🏁 Script executed:
#!/bin/bash
echo "=== TASK_ID_PATTERN definition in validation module ==="
cat packages/tm-core/src/modules/tasks/validation/task-id.ts
echo ""
echo "=== Where schema isValidTaskIdFormat is used ==="
rg -n "isValidTaskIdFormat" packages/tm-core/src/common/schemas/task-id.schema.ts
echo ""
echo "=== Where validation isValidTaskIdFormat is imported/used ==="
rg -n "isValidTaskIdFormat" --type=ts -B 2 -A 2 | grep -v "^packages/tm-core/src/common/schemas" | head -40
Repository: eyaltoledano/claude-task-master
Length of output: 8734
🏁 Script executed:
#!/bin/bash
echo "=== Validation schema module snippet ==="
sed -n '63,97p' packages/tm-core/src/common/schemas/task-id.schema.ts
echo ""
echo "=== Validation module TASK_ID_PATTERN ==="
sed -n '30,31p' packages/tm-core/src/modules/tasks/validation/task-id.ts
echo ""
echo "=== Test case checking 1.2.3 ==="
rg -n "1.2.3" packages/tm-core/src/modules/tasks/validation/task-id.spec.ts -B 1 -A 1
Repository: eyaltoledano/claude-task-master
Length of output: 1833
Two implementations of isValidTaskIdFormat have divergent validation logic that must be reconciled.
The schema module (lines 82-97) rejects deep nesting like "1.2.3" per its FILE_SUBTASK_PATTERN (one dot only), but the exported function in packages/tm-core/src/modules/tasks/validation/task-id.ts accepts "1.2.3" as valid. This inconsistency violates the documented file storage constraint ("exactly one level" of subtasks). The validation module's TASK_ID_PATTERN regex and test cases (spec lines 37, 61) expect deep nesting to be valid, contradicting the schema's design. Align both implementations to enforce consistent rules across schema refinements and runtime validation.
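One way to reconcile them, sketched under the assumption that the one-level rule is the intended behavior: both modules consume a single shared pattern so the Zod refinement and the runtime validator cannot drift apart. Names are illustrative:

```typescript
// Sketch of the reconciliation: one shared pattern enforcing "one subtask
// level only" plus a 3-letter API prefix with optional hyphen, used by both
// the schema refinement and the runtime validator.
const TASK_ID_PATTERN = /^(\d+(\.\d+)?|[A-Za-z]{3}-?\d+)$/;

function isValidTaskIdFormat(id: string): boolean {
	if (!id) return false;
	return TASK_ID_PATTERN.test(id.trim());
}
```

With a single constant exported from one module, "1.2.3" is rejected everywhere and the file-storage constraint holds consistently.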
```diff
+// Note: normalizeDisplayId is now exported from common/schemas/task-id.schema.ts
 export {
 	generateTaskId as generateId, // Alias for backward compatibility
 	generateTaskId,
 	generateSubtaskId,
 	isValidTaskId,
 	isValidSubtaskId,
-	getParentTaskId,
-	normalizeDisplayId
+	getParentTaskId
 } from './id-generator.js';
```
🧩 Analysis chain
🏁 Script executed:
rg -n "normalizeDisplayId" --type ts --type js
Repository: eyaltoledano/claude-task-master
Length of output: 5062
🏁 Script executed:
# Check what's exported from tm-core public API (index.ts)
cat -n packages/tm-core/src/index.ts | head -100
Repository: eyaltoledano/claude-task-master
Length of output: 3245
🏁 Script executed:
# Check if normalizeDisplayId is still exported from id-generator.js
rg -n "export.*normalizeDisplayId" packages/tm-core/src/common/utils/id-generator.ts
Repository: eyaltoledano/claude-task-master
Length of output: 135
🏁 Script executed:
# Check if normalizeDisplayId is the ONLY duplicate or if there are others
diff <(grep -n "export function " packages/tm-core/src/common/utils/id-generator.ts) <(grep -n "export function " packages/tm-core/src/common/schemas/task-id.schema.ts) | grep normalizeDisplayId
Repository: eyaltoledano/claude-task-master
Length of output: 162
🏁 Script executed:
# Verify that id-generator.ts no longer appears in utils/index.ts exports
cat packages/tm-core/src/common/utils/index.ts
Repository: eyaltoledano/claude-task-master
Length of output: 1871
Centralizing normalizeDisplayId export looks correct; cleanup needed for duplicate definition
Moving normalizeDisplayId out of the common utils index and into common/schemas/task-id.schema.ts aligns with the utilities guidelines to centralize task ID formatting logic and matches the minimal, need-based export strategy for tm-core indexes.
Verification confirms no breaking changes: no remaining imports of normalizeDisplayId from packages/tm-core/src/common/utils/index.ts exist elsewhere in the repo. The function is correctly accessible through the public API via common/schemas/index.ts.
However, remove the duplicate normalizeDisplayId export from packages/tm-core/src/common/utils/id-generator.ts (line 164) since it's no longer re-exported from the utils barrel and task-id.schema.ts is now the canonical location. This eliminates confusion and avoids maintaining two copies of the same function.
🤖 Prompt for AI Agents
In packages/tm-core/src/common/utils/id-generator.ts around line 164, there's a
duplicate export of normalizeDisplayId that should be removed because
normalizeDisplayId is now canonical in common/schemas/task-id.schema.ts; delete
the export (and any local duplicate implementation if present) from
id-generator.ts so the symbol is only provided from the task-id.schema location
and ensure the file still exports the other id functions unchanged.
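A sketch of the canonical definition that would remain in the schema module after the cleanup. The behavior is inferred from the test suite earlier in this thread (numeric IDs pass through, 3-letter prefixes normalize to `ABC-n`, unmatched patterns return as-is); the actual implementation may differ:

```typescript
// Hypothetical end state: normalizeDisplayId lives only in
// common/schemas/task-id.schema.ts, and id-generator.ts no longer exports it.
export function normalizeDisplayId(id: string): string {
	if (!id) return id; // preserves '', null, undefined as-is
	const trimmed = id.trim();
	// 3-letter prefix with optional hyphen -> uppercase with hyphen
	const match = trimmed.match(/^([A-Za-z]{3})-?(\d+)$/);
	return match ? `${match[1].toUpperCase()}-${match[2]}` : trimmed;
}
```

Consumers then import from the schemas barrel, keeping one copy of the formatting logic.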
```typescript
if (!credentials.refreshToken) {
	this.logger.warn(
		'No refresh token received from server - session refresh will not work'
	);
}

this.logger.info(
	`OAuth session started - ${cliData.name} v${cliData.version} on port ${port}`
);
this.logger.debug('CLI data:', cliData);
const session: Session = {
	access_token: credentials.token,
	refresh_token: credentials.refreshToken ?? '',
	expires_in: credentials.expiresAt
		? Math.floor(
				(new Date(credentials.expiresAt).getTime() - Date.now()) / 1000
			)
		: 3600,
	token_type: 'bearer',
	user: {
		id: credentials.userId,
		email: credentials.email,
		app_metadata: {},
		user_metadata: {},
		aud: 'authenticated',
		created_at: ''
	}
};
```
🧩 Analysis chain
🌐 Web query:
Supabase setSession empty refresh_token behavior authentication
💡 Result:
Short answer
- supabase.auth.setSession expects a valid access_token and refresh_token; passing an empty/blank refresh_token is not a supported way to clear or reset the client session and will not reliably clear persisted session state (it will either noop or produce an error when the token is invalid). [1][2]
- To remove/clear a client session use supabase.auth.signOut() (or explicitly remove the stored tokens from your storage implementation). signOut removes the stored session and triggers a SIGNED_OUT event. [3]
Sources
- setSession docs (requires access_token + refresh_token). [1]
- Community/GitHub reports showing setSession not persisting/behaving unexpectedly and that calling setSession with bad/empty tokens does not reliably clear client state. [2]
- signOut / session docs (recommended method to clear client session and storage). [3]
If you want, tell me the exact client (supabase-js v1/v2, server vs browser) and I’ll give a minimal code snippet to correctly clear or re-initialize the session.
🏁 Script executed:
# Check the context around the setSession call and see how it's used
cd packages/tm-core/src/modules/auth/services && grep -n "setSession" oauth-service.ts -A 10 -B 5
Repository: eyaltoledano/claude-task-master
Length of output: 455
🏁 Script executed:
# Search for other setSession calls in the codebase to understand the pattern
rg "setSession" --type ts --type tsx -B 3 -A 3
Repository: eyaltoledano/claude-task-master
Length of output: 102
🏁 Script executed:
# Check the credential-store to understand when refreshToken could be undefined
fd "credential-store" --type f
Repository: eyaltoledano/claude-task-master
Length of output: 57
🏁 Script executed:
# Search for how getCredentials is called and what happens with missing refreshToken
rg "getCredentials|refreshToken" packages/tm-core/src --type ts -B 2 -A 2 | head -80
Repository: eyaltoledano/claude-task-master
Length of output: 6298
🏁 Script executed:
# Check what happens after setSession - is there error handling?
sed -n '199,240p' packages/tm-core/src/modules/auth/services/oauth-service.ts
Repository: eyaltoledano/claude-task-master
Length of output: 1025
🏁 Script executed:
# Check if there's a try-catch around this code
sed -n '180,250p' packages/tm-core/src/modules/auth/services/oauth-service.ts
Repository: eyaltoledano/claude-task-master
Length of output: 1764
Do not call setSession with an empty refresh_token—Supabase does not support this pattern.
The code contains contradictory intent: the comments state "Supabase requires a valid refresh_token to manage token lifecycle" and "Only set session if we have a valid refresh token," yet the code unconditionally calls setSession with credentials.refreshToken ?? '' (line 207), passing an empty string when the token is missing. Supabase's setSession expects both a valid access_token and refresh_token; passing an empty refresh token will cause unpredictable behavior (noop or error) and prevent token refresh flows from working.
Either skip the setSession call entirely when refreshToken is missing, or throw an error to fail explicitly rather than create a session in an invalid state.
🤖 Prompt for AI Agents
In packages/tm-core/src/modules/auth/services/oauth-service.ts around lines 199
to 222, the code currently builds a Session with refresh_token:
credentials.refreshToken ?? '' and will call setSession with an empty refresh
token; change the flow to only construct and pass a Session to setSession when
credentials.refreshToken is present (i.e., if (!credentials.refreshToken) keep
the existing warn and then skip creating/passing a Session), otherwise proceed
to build the Session with the real refresh token and call setSession; do not set
refresh_token to an empty string or call setSession when refreshToken is
missing.
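A simplified sketch of the guarded flow this prompt describes. Field names follow the diff above, but the `Session` shape is abbreviated and the function boundary is illustrative:

```typescript
// Only build a session when a refresh token exists; callers skip setSession
// entirely (keeping the existing warning) when this returns null.
interface StoredCredentials {
	token: string;
	refreshToken?: string;
	expiresAt?: string;
}

function buildSession(credentials: StoredCredentials) {
	if (!credentials.refreshToken) {
		// Supabase setSession needs both tokens; bail out rather than pass ''.
		return null;
	}
	return {
		access_token: credentials.token,
		refresh_token: credentials.refreshToken,
		expires_in: credentials.expiresAt
			? Math.floor(
					(new Date(credentials.expiresAt).getTime() - Date.now()) / 1000
				)
			: 3600,
		token_type: 'bearer' as const
	};
}
```

The alternative of throwing on a missing refresh token also works; the key point is never constructing a session with an empty `refresh_token`.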
```diff
 /**
  * Pattern for validating a single task ID
- * Supports:
+ * Permissive input - accepts with or without hyphen for API IDs
  * - Numeric: "1", "15", "999"
- * - Numeric subtasks: "1.2", "15.3.1"
- * - Alphanumeric display IDs: "HAM-123", "PROJ-456" (main tasks only, no subtask notation)
+ * - Numeric subtasks: "1.2" (one level only)
+ * - API display IDs: "HAM-1", "ham-1", "HAM1", "ham1"
  */
-export const TASK_ID_PATTERN = /^(\d+(\.\d+)*|[A-Za-z]+-\d+)$/;
+export const TASK_ID_PATTERN = /^(\d+(\.\d+)?|[A-Za-z]{3}-?\d+)$/;
```
🧩 Analysis chain
🏁 Script executed:
# First, let's examine the file to understand the context and documentation
head -35 packages/tm-core/src/modules/tasks/validation/task-id.ts
Repository: eyaltoledano/claude-task-master
Length of output: 1133
🏁 Script executed:
# Search for usage of deep-nested task IDs (1.2.3 pattern) in codebase
rg '\d+\.\d+\.\d+' --type=ts --type=js
Repository: eyaltoledano/claude-task-master
Length of output: 10733
🏁 Script executed:
# Search for any references to multi-level or deep nesting in docs/comments
rg -i 'multi.*level|deep.*nest|sub-?subtask' --type=ts --type=js --type=md
Repository: eyaltoledano/claude-task-master
Length of output: 1626
🏁 Script executed:
# Look for task ID usage in tests and other validation files
fd -e ts -e js | xargs grep -l "TASK_ID_PATTERN\|subtask" | head -20
Repository: eyaltoledano/claude-task-master
Length of output: 1017
🏁 Script executed:
# Search for the old pattern to understand what changed
rg '\(\\.\\d\+\)\*' --type=ts --type=js
Repository: eyaltoledano/claude-task-master
Length of output: 57
🏁 Script executed:
# Read the full test file for task-id validation
cat -n packages/tm-core/src/modules/tasks/validation/task-id.spec.ts
Repository: eyaltoledano/claude-task-master
Length of output: 6521
🏁 Script executed:
# Read the Zod schema file
cat -n packages/tm-core/src/common/schemas/task-id.schema.ts
Repository: eyaltoledano/claude-task-master
Length of output: 7958
🏁 Script executed:
# Check id-generator.ts to understand nested subtask support
cat -n packages/tm-core/src/common/utils/id-generator.ts | head -100
Repository: eyaltoledano/claude-task-master
Length of output: 3463
🏁 Script executed:
# Check what the cross-tag-error-display tests expect
sed -n '1,50p' tests/unit/scripts/modules/ui/cross-tag-error-display.test.js
Repository: eyaltoledano/claude-task-master
Length of output: 1431
🏁 Script executed:
# Let's verify the actual pattern more carefully and check if there are inconsistencies
cat -n packages/tm-core/src/modules/tasks/validation/task-id.ts | sed -n '1,50p'
Repository: eyaltoledano/claude-task-master
Length of output: 2016
🏁 Script executed:
# Check if there are multiple task-id validation files that might have different patterns
find . -name '*task-id*' -type f | grep -v node_modules | grep -v dist
Repository: eyaltoledano/claude-task-master
Length of output: 314
🏁 Script executed:
# Let's check the learnings one more time - see if assets/AGENTS.md actually mentions deep subtasks
cat -n assets/AGENTS.md | grep -A5 -B5 "Sub-subtask"
Repository: eyaltoledano/claude-task-master
Length of output: 383
🏁 Script executed:
# Let's verify what the actual test expectations are in the test file being reviewed
grep -n "expect.*TASK_ID_PATTERN.*test" packages/tm-core/src/modules/tasks/validation/task-id.spec.ts
Repository: eyaltoledano/claude-task-master
Length of output: 1195
🏁 Script executed:
# Check if there's a mismatch between what isValidTaskIdFormat is supposed to do vs TASK_ID_PATTERN
grep -n "1\.2\.3\|15\.3\.1" packages/tm-core/src/modules/tasks/validation/task-id.spec.ts
Repository: eyaltoledano/claude-task-master
Length of output: 343
🏁 Script executed:
# Let's confirm the old pattern by looking at git history or any comments that show the previous state
grep -n "(\\.\\d\+)\*" packages/tm-core/src/modules/tasks/validation/task-id.ts
Repository: eyaltoledano/claude-task-master
Length of output: 57
🏁 Script executed:
# Check if there are any other files that reference the pattern change or discuss the breaking change
rg "subtask.*depth|deep.*nest|one.*level" packages/tm-core --type=ts -A2 -B2
Repository: eyaltoledano/claude-task-master
Length of output: 6368
Test expectations conflict with the new pattern: tests expect deep nesting to pass but the pattern rejects it.
The regex change from (\.\d+)* to (\.\d+)? restricts subtask depth to a single level, aligning with the documented design ("one level only"). However, the test file at lines 37-38 and 61 expect TASK_ID_PATTERN.test('1.2.3') and isValidTaskIdFormat('1.2.3') to return true, which contradicts the new pattern and will fail. The Zod schema in task-id.schema.ts and its tests correctly enforce the same one-level restriction, but the module's test file is inconsistent. Additionally, assets/AGENTS.md documents support for "Sub-subtasks: 1.1.1, 1.1.2", which conflicts with the validation layer. Update the failing test expectations or clarify whether deep nesting support was intentionally removed.
🤖 Prompt for AI Agents
packages/tm-core/src/modules/tasks/validation/task-id.ts lines 23-30: the regex
was intentionally changed to allow only one subtask level but tests (at lines
~37-38 and ~61) and assets/AGENTS.md still expect deep nesting (e.g., "1.2.3");
update the failing unit tests to assert that deep nesting returns false
(TASK_ID_PATTERN.test('1.2.3') and isValidTaskIdFormat('1.2.3') should be false)
and adjust or annotate assets/AGENTS.md to remove or clarify the examples for
sub-subtasks so documentation matches the one-level-only validation enforced by
the Zod schema and this regex.
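The behavioral difference between the two regexes quoted in the diff above can be checked directly; the assertions show exactly which spec expectations need to flip:

```typescript
// Old vs. new TASK_ID_PATTERN, as quoted in the review diff above.
const OLD_PATTERN = /^(\d+(\.\d+)*|[A-Za-z]+-\d+)$/;
const NEW_PATTERN = /^(\d+(\.\d+)?|[A-Za-z]{3}-?\d+)$/;

// Deep nesting: previously accepted, now rejected (one level only).
console.assert(OLD_PATTERN.test('1.2.3') === true);
console.assert(NEW_PATTERN.test('1.2.3') === false);

// Hyphen-less API IDs: previously rejected, now accepted.
console.assert(OLD_PATTERN.test('HAM1') === false);
console.assert(NEW_PATTERN.test('HAM1') === true);

// Single-level subtasks pass under both.
console.assert(NEW_PATTERN.test('1.2') === true);
```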
```diff
 	getStatus(): WorkflowStatus {
-		return this.workflowService.getStatus();
+		return this.getWorkflowService().getStatus();
 	}

 	/**
 	 * Get workflow context
 	 */
 	getContext(): WorkflowContext {
-		return this.workflowService.getContext();
+		return this.getWorkflowService().getContext();
 	}

 	/**
 	 * Get next action to perform in workflow
 	 */
 	getNextAction(): NextAction {
-		return this.workflowService.getNextAction();
+		return this.getWorkflowService().getNextAction();
 	}
```
🧩 Analysis chain
🏁 Script executed:
# First, let's look at the file structure and locate the getWorkflowService method
fd "workflow-domain.ts" --type f
Repository: eyaltoledano/claude-task-master
Length of output: 131
🏁 Script executed:
# Get the full context of the file to understand the class structure
wc -l packages/tm-core/src/modules/workflow/workflow-domain.ts
Repository: eyaltoledano/claude-task-master
Length of output: 135
🏁 Script executed:
# Read the file to understand the getWorkflowService implementation
cat -n packages/tm-core/src/modules/workflow/workflow-domain.ts
Repository: eyaltoledano/claude-task-master
Length of output: 5102
🏁 Script executed:
# Find the WorkflowService implementation
fd "workflow.service" --type f
Repository: eyaltoledano/claude-task-master
Length of output: 141
🏁 Script executed:
# Get the size of the file
wc -l packages/tm-core/src/modules/workflow/services/workflow.service.ts
Repository: eyaltoledano/claude-task-master
Length of output: 145
🏁 Script executed:
# Read the WorkflowService file to understand state handling
cat -n packages/tm-core/src/modules/workflow/services/workflow.service.ts
Repository: eyaltoledano/claude-task-master
Length of output: 20816
🏁 Script executed:
# Search for tests related to WorkflowDomain to understand expected usage
fd "workflow.*domain.*test" -type f
Repository: eyaltoledano/claude-task-master
Length of output: 245
🏁 Script executed:
# Also search for any callers of getStatus, getContext, getNextAction to see usage patterns
rg "getStatus\(\)|getContext\(\)|getNextAction\(\)" --type ts -B 2 -A 2
Repository: eyaltoledano/claude-task-master
Length of output: 50387
🏁 Script executed:
# Check if there are any guards/checks before calling these methods
rg "hasWorkflow\(\)" --type ts -B 3 -A 3
Repository: eyaltoledano/claude-task-master
Length of output: 13119
Document that getStatus(), getContext(), and getNextAction() require an active workflow.
These synchronous methods will throw "No active workflow. Start or resume a workflow first." if called without a prior start() or resume(). Add JSDoc noting this requirement, or alternatively, add a guard check in WorkflowDomain that calls hasWorkflow() and resumes if needed before delegating. Currently all callers guard properly, but documenting this contract prevents future misuse.
🤖 Prompt for AI Agents
In packages/tm-core/src/modules/workflow/workflow-domain.ts around lines 97 to
113, document and guard the synchronous accessors: add JSDoc to getStatus(),
getContext(), and getNextAction() that states they require an active workflow
(i.e., start() or resume() must have been called) and that they will throw "No
active workflow. Start or resume a workflow first." if none exists;
additionally, add a small guard at the top of each method that checks
this.hasWorkflow() and throws the same error (or resumes if the intended
behavior is to auto-resume) before delegating to getWorkflowService(), so
callers are protected and the contract is explicit.
What type of PR is this?
Description
Related Issues
How to Test This
# Example commands or steps

Expected result:
Contributor Checklist
- npm run changeset
- npm test
- npm run format-check (or npm run format to fix)

Changelog Entry
For Maintainers
Summary by CodeRabbit
Release Notes
New Features
- finalize command to complete autopilot workflows
- tm/ prefix for improved task organization

Bug Fixes
Configuration Changes
Documentation