Feat: add typescript and start refactoring #1190
Conversation
Important: Review was skipped, as the selected files did not have any reviewable changes. 💤 Files selected but had no reviewable changes (3)

Walkthrough

Adds a new monorepo layout with packages @tm/core and @tm/cli, implementing a TypeScript-first core library (config, storage, services, entities, errors, auth, logger, AI/provider contracts), a packaged CLI with commands and UI utilities, asset resolution updates, build/test configuration, and migration/docs artifacts and task reports.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant User
    participant CLI as @tm/cli (ListTasksCommand)
    participant Core as @tm/core (TaskMasterCore)
    participant Cfg as ConfigManager
    participant Svc as TaskService
    participant Factory as StorageFactory
    participant Stor as Storage (File/API)
    User->>CLI: tm list [--tag,--status,...]
    CLI->>Core: getTaskList(opts)
    Core->>Cfg: create(projectPath) / getConfig()
    Core->>Svc: getTaskList(opts)
    Svc->>Factory: create(config, projectPath)
    Factory->>Cfg: inspect storage config / AuthManager
    Factory->>Stor: instantiate FileStorage or ApiStorage
    Svc->>Stor: loadTasks(tag)
    Stor-->>Svc: tasks[]
    Svc-->>Core: { tasks, total, filtered, storageType }
    Core-->>CLI: ListTasksResult
    CLI->>CLI: render results (text/json/compact)
    CLI-->>User: output
```

```mermaid
sequenceDiagram
    autonumber
    participant User
    participant CLI as @tm/cli (AuthCommand)
    participant AM as AuthManager
    participant OA as OAuthService
    participant SB as SupabaseAuthClient
    participant CS as CredentialStore
    participant Browser
    User->>CLI: tm auth login
    CLI->>AM: getInstance()
    AM->>OA: authenticate({ callbacks })
    OA->>OA: start local callback server
    OA->>Browser: open(auth URL)
    note right of Browser: User authenticates in browser
    Browser-->>OA: callback (code/state)
    OA->>SB: exchangeCodeForSession(code)
    SB-->>OA: session & user
    OA->>CS: saveCredentials(credentials)
    OA-->>AM: credentials
    AM-->>CLI: credentials/status
    CLI-->>User: success output
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~180 minutes

Possibly related PRs

Pre-merge checks: ✅ 3 passed

✨ Finishing Touches: 🧪 Generate unit tests
- add typescript
- add npm workspaces

Branch updated: b9e3eec to cf1687b
Actionable comments posted: 213
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (13)
src/utils/rule-transformer.js (2)
228-233: Defensive: handle missing profile.fileMap. Avoid a TypeError when profiles omit fileMap.

```diff
- const sourceFiles = Object.keys(profile.fileMap);
+ const fileMap = profile.fileMap || {};
+ const sourceFiles = Object.keys(fileMap);
```

Also reference fileMap instead of profile.fileMap below when reading target filenames.
195-196: Consistency: use shared log() instead of console.error. Keeps logging format and silent-mode behavior consistent.

```diff
- console.error(`Error converting rule file: ${error.message}`);
+ log('error', `[Rule Transformer] Error converting rule file: ${error.message}`);
```

scripts/init.js (2)
620-637: Use the resolver here too for .gitignore to avoid path coupling.

```diff
- try {
-   const gitignoreTemplatePath = path.join(
-     __dirname,
-     '..',
-     'assets',
-     'gitignore'
-   );
-   const templateContent = fs.readFileSync(gitignoreTemplatePath, 'utf8');
+ try {
+   const templateContent = readAsset('gitignore', 'utf8');
    manageGitignoreFile(
      path.join(targetDir, GITIGNORE_FILE),
      templateContent,
      storeTasksInGit,
      log
    );
  } catch (error) {
    log('error', `Failed to create .gitignore: ${error.message}`);
  }
```
269-283: Fix README overwrite logic. This overwrites the same README-task-master.md when it already exists. The intent is likely "if README.md exists, write a separate README-task-master.md".

```diff
- // Handle README.md - offer to preserve or create a different file
- if (filename === 'README-task-master.md') {
+ // Handle README.md - preserve user's README, write ours separately
+ if (filename === 'README.md') {
    log('info', `${targetPath} already exists`);
    // Create a separate README file specifically for this project
    const taskMasterReadmePath = path.join(
      path.dirname(targetPath),
      'README-task-master.md'
    );
    fs.writeFileSync(taskMasterReadmePath, content);
    log(
      'success',
      `Created ${taskMasterReadmePath} (preserved original README-task-master.md)`
    );
    return;
  }
```

scripts/modules/config-manager.js (2)
691-701: Bug: Bedrock wrongly treated as "no API key required". Returning true for Bedrock skips credential checks. Bedrock needs AWS credentials; this causes false positives and downstream failures.

```diff
- const providersWithoutApiKeys = [
-   CUSTOM_PROVIDERS.OLLAMA,
-   CUSTOM_PROVIDERS.BEDROCK,
-   CUSTOM_PROVIDERS.MCP,
-   CUSTOM_PROVIDERS.GEMINI_CLI
- ];
+ const providersWithoutApiKeys = [
+   CUSTOM_PROVIDERS.OLLAMA,
+   CUSTOM_PROVIDERS.MCP,
+   CUSTOM_PROVIDERS.GEMINI_CLI
+ ];
```
997-1002: Keep the exported providersWithoutApiKeys consistent with the Bedrock fix. Remove Bedrock here too so consumers don't misinterpret its status.

```diff
 export const providersWithoutApiKeys = [
-  CUSTOM_PROVIDERS.OLLAMA,
-  CUSTOM_PROVIDERS.BEDROCK,
-  CUSTOM_PROVIDERS.GEMINI_CLI,
-  CUSTOM_PROVIDERS.MCP
+  CUSTOM_PROVIDERS.OLLAMA,
+  CUSTOM_PROVIDERS.GEMINI_CLI,
+  CUSTOM_PROVIDERS.MCP
 ];
```

scripts/modules/prompt-manager.js (2)
349-372: Fix each-loop item access and special variables; support {{this}} and {{@index}}. Currently, @index/@first/@last won't interpolate (the variable regex excludes @), and primitives in arrays can't be referenced ({{.}}/{{this}} are unsupported). Implement `this` binding and expand the variable regex.

```diff
  rendered = rendered.replace(
    /\{\{#each\s+(\w+(?:\.\w+)*)\}\}([\s\S]*?)\{\{\/each\}\}/g,
    (match, path, content) => {
      const array = this.getNestedValue(variables, path);
      if (!Array.isArray(array)) return '';
      return array
        .map((item, index) => {
-         // Create a context with item properties and special variables
-         const itemContext = {
-           ...variables,
-           ...item,
-           '@index': index,
-           '@first': index === 0,
-           '@last': index === array.length - 1
-         };
+         // Context for each item: bind item to `this` and expose special vars
+         const spreadable = item && typeof item === 'object' && !Array.isArray(item) ? item : {};
+         const itemContext = {
+           ...variables,
+           ...spreadable,
+           this: item,
+           '@index': index,
+           '@first': index === 0,
+           '@last': index === array.length - 1
+         };
          // Recursively render the content with item context
          return this.renderTemplate(content, itemContext);
        })
        .join('');
    }
  );

- // Handle variable substitution {{variable}}
- rendered = rendered.replace(/\{\{(\w+(?:\.\w+)*)\}\}/g, (match, path) => {
+ // Support {{.}} shorthand for current item
+ rendered = rendered.replace(/\{\{\.\}\}/g, () => {
+   const value = variables?.this;
+   return value !== undefined ? String(value) : '';
+ });
+
+ // Handle variable substitution {{variable}} including @-prefixed and dot paths
+ rendered = rendered.replace(/\{\{(@?\w+(?:\.@?\w+)*)\}\}/g, (match, path) => {
    const value = this.getNestedValue(variables, path);
    return value !== undefined ? value : '';
  });
```

Add a follow-up unit test (in the updated tests file) asserting that {{this}} and {{@index}} render correctly.

Also applies to: 383-388
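The intended {{#each}} semantics can be sketched standalone. `renderEach` below is a hypothetical, simplified helper (top-level array names only, no nested paths), not the project's PromptManager:

```javascript
// Minimal sketch of the proposed {{#each}} semantics with {{this}} and
// {{@index}} support. `renderEach` is a hypothetical standalone helper,
// not the project's PromptManager implementation.
function renderEach(template, variables) {
  return template.replace(
    /\{\{#each\s+(\w+)\}\}([\s\S]*?)\{\{\/each\}\}/g,
    (match, name, content) => {
      const array = variables[name];
      if (!Array.isArray(array)) return '';
      return array
        .map((item, index) => {
          // Only spread plain objects; primitives stay reachable via {{this}}
          const spreadable =
            item && typeof item === 'object' && !Array.isArray(item) ? item : {};
          const ctx = {
            ...variables,
            ...spreadable,
            this: item,
            '@index': index,
            '@first': index === 0,
            '@last': index === array.length - 1
          };
          // Note the expanded regex: it now accepts @-prefixed names
          return content.replace(/\{\{(@?\w+)\}\}/g, (m, key) => {
            const value = ctx[key];
            return value !== undefined ? String(value) : '';
          });
        })
        .join('');
    }
  );
}

// Primitives are reachable via {{this}}, and {{@index}} interpolates:
console.log(renderEach('{{#each items}}{{@index}}:{{this}} {{/each}}', { items: ['a', 'b'] }));
// → "0:a 1:b "
```

This is the behavior the suggested diff aims for; the real implementation additionally supports nested paths via getNestedValue and recursive template rendering.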
85-118: Cache key stability and unbounded growth. JSON.stringify makes the cache key order-sensitive; the Map also grows unbounded.
- Normalize variables by stable stringify (e.g., JSON.stringify with sorted keys).
- Add a simple size cap.
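The stable-stringify idea in the first bullet can be sketched as below; `stableStringify` is a hypothetical helper, assuming variables is plain, acyclic JSON-like data:

```javascript
// Order-insensitive serialization for cache keys: sort object keys
// recursively so equivalent variable objects always produce the same key.
// Hypothetical helper, not existing project code; assumes no cycles.
function stableStringify(value) {
  if (Array.isArray(value)) {
    return '[' + value.map(stableStringify).join(',') + ']';
  }
  if (value && typeof value === 'object') {
    return (
      '{' +
      Object.keys(value)
        .sort()
        .map((k) => JSON.stringify(k) + ':' + stableStringify(value[k]))
        .join(',') +
      '}'
    );
  }
  return JSON.stringify(value);
}

// Key order no longer changes the cache key:
console.log(stableStringify({ b: 1, a: 2 }) === stableStringify({ a: 2, b: 1 })); // true
```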
```diff
+const MAX_CACHE_ENTRIES = 200;
 const cacheKey = `${promptId}-${JSON.stringify(variables)}-${variantKey}`;
 ...
 this.cache.set(cacheKey, rendered);
+if (this.cache.size > MAX_CACHE_ENTRIES) {
+  const oldest = this.cache.keys().next().value;
+  this.cache.delete(oldest);
+}
```

mcp-server/src/index.js (2)
118-126: Bug: logging via session when session is falsy. In the else branch, session is falsy but is dereferenced, causing a crash. Log via the class logger instead.

```diff
- } else {
-   session.server.sendLoggingMessage({
-     data: {
-       context: session.context,
-       message: `No MCP sessions available, providers not registered`
-     },
-     level: 'warn'
-   });
- }
+ } else {
+   this.logger.warn('No MCP sessions available, providers not registered');
+ }
```
5-5: Unused import. fs appears unused after switching to a JSON import for the version. Remove it.

```diff
-import fs from 'fs';
```

mcp-server/src/tools/utils.js (3)
178-193: Avoid logging whole session objects. This logs a large serialized session, potentially including sensitive data. Log only minimal, relevant fields.

```diff
- log.info(
-   `Session object: ${JSON.stringify({
-     hasSession: !!session,
-     hasRoots: !!session?.roots,
-     rootsType: typeof session?.roots,
-     isRootsArray: Array.isArray(session?.roots),
-     rootsLength: session?.roots?.length,
-     firstRoot: session?.roots?.[0],
-     hasRootsRoots: !!session?.roots?.roots,
-     rootsRootsType: typeof session?.roots?.roots,
-     isRootsRootsArray: Array.isArray(session?.roots?.roots),
-     rootsRootsLength: session?.roots?.roots?.length,
-     firstRootsRoot: session?.roots?.roots?.[0]
-   })}`
- );
+ log.debug(
+   `Session roots summary: primary=${session?.roots?.[0]?.uri ?? 'n/a'}, alt=${session?.roots?.roots?.[0]?.uri ?? 'n/a'}, count=${Array.isArray(session?.roots) ? session.roots.length : 0}`
+ );
```
300-305: Handle string error payloads from callers. Some callers (e.g., executeTaskMasterCommand) return error as a string. Extract it safely.

```diff
- if (!result.success) {
-   const errorMsg = result.error?.message || `Unknown ${errorPrefix}`;
+ if (!result.success) {
+   const errorMsg =
+     typeof result.error === 'string'
+       ? result.error
+       : result.error?.message || `Unknown ${errorPrefix}`;
    log.error(`${errorPrefix}: ${errorMsg}`);
    return createErrorResponse(errorMsg, versionInfo, tagInfo);
  }
```
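The suggested extraction amounts to a small helper. `extractErrorMessage` below is a hypothetical illustration of the idea, not code that exists in mcp-server/src/tools/utils.js:

```javascript
// Normalize an error payload that may be a string, an Error/object with a
// message, or missing entirely. Hypothetical helper for illustration only.
function extractErrorMessage(error, errorPrefix) {
  return typeof error === 'string'
    ? error
    : error?.message || `Unknown ${errorPrefix}`;
}

console.log(extractErrorMessage('disk full', 'tool error'));       // "disk full"
console.log(extractErrorMessage(new Error('boom'), 'tool error')); // "boom"
console.log(extractErrorMessage(undefined, 'tool error'));         // "Unknown tool error"
```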
357-376: Add timeout & buffer; correct fallback path.

Add `timeout: 120_000, maxBuffer: 10 * 1024 * 1024` to spawnOptions for both spawnSync calls to prevent hanging processes and truncated output.

The fallback invokes node scripts/dev.js, but there is no scripts/dev.js in the repo. Update the command to the actual script location (e.g. path.join(__dirname, '../../../scripts/dev.js'), or adjust cwd) so the fallback path resolves correctly. [mcp-server/src/tools/utils.js:369-376]
♻️ Duplicate comments (1)
packages/tm-core/tests/unit/smoke.test.ts (1)
28-31: Avoid brittle name/version assertions. See the suggested diff in subpath-exports.test.ts; the same rationale applies here if duplicated.
```text
    },
    "tm-core-phase-1": {
      "tasks": [
        {
          "id": 115,
```
🧹 Nitpick (assertive)
Do not manually edit .taskmaster/tasks/tasks.json — regenerate via Taskmaster CLI.
Per project rules, this file is generated. Please revert these manual additions and use Taskmaster commands (e.g., task-master add-task / update-task, tag context tm-core-phase-1) to create this phase and tasks so formatting, IDs, and metadata remain consistent.
I can draft the exact CLI sequence once you confirm the desired task list and tag name.
🤖 Prompt for AI Agents
.taskmaster/tasks/tasks.json around lines 7300-7304: this block was manually
edited but the file is generated; revert the manual changes and recreate the
phase/tasks using the Taskmaster CLI instead of editing the JSON directly — run
the Taskmaster commands to add or update the phase and tasks (e.g., use
task-master add-task / update-task and specify the tag name like
tm-core-phase-1) so IDs, formatting and metadata are produced correctly, then
commit the regenerated file.
```text
    "metadata": {
      "created": "2025-08-06T08:51:19.649Z",
      "updated": "2025-08-20T21:32:21.837Z",
      "description": "Tasks for tm-core-phase-1 context"
    }
```
💡 Verification agent
🧩 Analysis chain
Metadata timestamps should be tool-managed.
The updated timestamp predates this PR. If the CLI manages this field, it will correct it; if not, please align it via the generation workflow instead of manual edits.
🏁 Script executed:

```shell
#!/bin/bash
jq -r '.["tm-core-phase-1"].metadata' .taskmaster/tasks/tasks.json
```

Length of output: 138
Remove manual metadata timestamp edits
Metadata timestamps in .taskmaster/tasks/tasks.json (e.g. the "updated" field 2025-08-20T21:32:21.837Z) must be managed by the MCP/CLI tools. Regenerate this file via the provided workflow to update these fields instead of editing them by hand.
🤖 Prompt for AI Agents
In .taskmaster/tasks/tasks.json around lines 8053 to 8057, you manually edited
metadata timestamp fields (e.g. the "updated" field 2025-08-20T21:32:21.837Z);
undo those manual changes and do not edit timestamps by hand — instead
regenerate the file using the project's MCP/CLI workflow (run the provided regen
command or workflow) so the tool updates timestamp fields correctly, then commit
the regenerated file.
```diff
  "json.format.enable": true,
- "json.validate.enable": true
+ "json.validate.enable": true,
+ "typescript.tsdk": "node_modules/typescript/lib",
```
🧹 Nitpick (assertive)
Prefer explicit workspace-relative TS SDK path.
Using "./node_modules/typescript/lib" avoids ambiguity when the repo is opened from subfolders.
Apply this diff:
```diff
-"typescript.tsdk": "node_modules/typescript/lib",
+"typescript.tsdk": "./node_modules/typescript/lib",
```

📝 Committable suggestion (‼️ carefully review before committing: ensure it accurately replaces the highlighted code, contains no missing lines, and is correctly indented; test thoroughly).
🤖 Prompt for AI Agents
In .vscode/settings.json around line 14 the TypeScript SDK path is currently
"node_modules/typescript/lib"; change it to the explicit workspace-relative form
"./node_modules/typescript/lib" to avoid ambiguity when the repo is opened from
subfolders—update the string value accordingly so the setting uses the leading
"./".
```text
    "allowJs": true,
    "resolveJsonModule": true,
```
💡 Verification agent
🧩 Analysis chain
Reassess allowJs at the root.
allowJs=true widens type-check scope to JS under bin/ and scripts/. If unintentional, move allowJs to only the packages that need it.
List JS files currently included by this config:
🏁 Script executed:
#!/bin/bash
echo "JS files under include globs:"
rg -n --glob 'bin/**' --glob 'scripts/**' --glob 'packages/*/src/**' --glob 'apps/*/src/**' -t js '' -S -nLength of output: 279626
Restrict allowJs to only the tsconfig files that need it
Root tsconfig.json’s "allowJs": true is pulling in JS under scripts/ (e.g. scripts/init.js, scripts/modules/update-config-tokens.js, scripts/test-claude-errors.js) for type-checking—remove it from the root and add "allowJs": true only in the package- or workspace-level tsconfig.json files that actually consume JS.
```text
    "noEmit": true,
    "baseUrl": "."
  },
```
🛠️ Refactor suggestion
Avoid root-level baseUrl in a monorepo.
Setting baseUrl="." at the repo root can encourage cross-package absolute imports and confuse editors/builds. Prefer per-package baseUrl in each package’s tsconfig.
Apply this diff:
```diff
-  "skipLibCheck": true,
-  "noEmit": true,
-  "baseUrl": "."
+  "skipLibCheck": true,
+  "noEmit": true
```
```text
  "tsx": {
    "tsconfig": {
      "allowImportingTsExtensions": false
    }
  },
```
🛠️ Refactor suggestion
Remove the non-standard “tsx” block; it’s ignored by tsc and likely by tsx here.
TS’s tsconfig doesn’t recognize a “tsx” section. If you intended to configure the tsx runner, place it under package.json’s "tsx" key instead.
Apply this diff:
```diff
-  "tsx": {
-    "tsconfig": {
-      "allowImportingTsExtensions": false
-    }
-  },
```
🤖 Prompt for AI Agents
In tsconfig.json around lines 17 to 21, remove the non-standard "tsx" block (it
is ignored by tsc); if those options were intended for the tsx runner, instead
add them under package.json's top-level "tsx" key (e.g., move the
allowImportingTsExtensions setting into package.json "tsx": { "tsconfig": {
"allowImportingTsExtensions": false } }), then validate with tsc/tsx to ensure
the setting takes effect.
```text
  clean: true,
  bundle: true, // Bundle everything into one file
  outDir: 'dist',
  publicDir: 'public',
```
tsup doesn’t support publicDir. Remove it.
publicDir is a Vite option, not tsup. It’s ignored/confusing here.
```diff
- publicDir: 'public',
```
🤖 Prompt for AI Agents
In tsup.config.ts around line 31, remove the unsupported Vite option publicDir
which is not recognized by tsup; edit the config to delete the publicDir entry
and, if you need to copy static assets for the build, replace it with an
explicit step (e.g., a postbuild script or a copy plugin/command) to copy the
public folder into the dist output instead of using publicDir in tsup.
```text
  loader: {
    '.js': 'jsx',
    '.ts': 'ts'
  },
```
🛠️ Refactor suggestion
Use 'js' loader for .js files.
Setting '.js' to 'jsx' is unnecessary and risky for non-JSX Node code.
```diff
 loader: {
-  '.js': 'jsx',
+  '.js': 'js',
   '.ts': 'ts'
 },
```

🤖 Prompt for AI Agents
In tsup.config.ts around lines 33 to 36 the loader mapping sets '.js' to 'jsx',
which can incorrectly transform non-JSX Node JS files; change the mapping so
'.js' uses the 'js' loader instead of 'jsx' (i.e., replace '.js': 'jsx' with
'.js': 'js') and keep '.ts': 'ts' as-is to avoid accidental JSX parsing for
plain .js files.
```text
@@ -0,0 +1,188 @@
# Task Master Migration Roadmap

## Overview
```
🧹 Nitpick (assertive)
Fix markdownlint violations (blank lines, code fence languages, trailing newline).
Add blank lines after headings, surround fenced code blocks with blank lines, specify a language for the ASCII diagram, and ensure the file ends with a newline. Sample fixes below; please apply consistently across the file.
```diff
-## Overview
-Gradual migration from scripts-based architecture to a clean monorepo with separated concerns.
+## Overview
+
+Gradual migration from scripts-based architecture to a clean monorepo with separated concerns.
```

````diff
-```
+```text
 ┌─────────────────────────────────────────────────┐
 │                 User Interfaces                 │
 ...
 └──────────┴──────────┴──────────┴────────────────┘
 ```
````

```diff
-### Phase 1: Core Extraction ✅ (In Progress)
-**Goal**: Move all business logic to @tm/core
+### Phase 1: Core Extraction ✅ (In Progress)
+
+**Goal**: Move all business logic to @tm/core
```

Also add a final trailing newline. Consider running markdownlint --fix and/or prettier --write .taskmaster/docs/MIGRATION-ROADMAP.md for consistency.
Also applies to: 8-21, 25-25, 37-37, 48-48, 65-65, 73-73, 84-86, 96-103, 104-111, 120-121, 122-126, 127-131, 132-141, 142-146, 147-171, 172-181, 188-188
🧰 Tools
🪛 markdownlint-cli2 (0.17.2)
3-3: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
🤖 Prompt for AI Agents
.taskmaster/docs/MIGRATION-ROADMAP.md (lines 3 and affected ranges
8-21,25,37,48,65,73,84-86,96-103,104-111,120-121,122-126,127-131,132-141,142-146,147-171,172-181,188):
add a single blank line after each heading and ensure every fenced code block is
surrounded by a blank line before and after; for the ASCII diagram specify a
language (use ```text) on the opening fence; remove trailing whitespace and
ensure the file ends with a single trailing newline; after making these edits
run markdownlint --fix and/or prettier --write
.taskmaster/docs/MIGRATION-ROADMAP.md to apply consistent formatting.
```text
@@ -5,23 +5,31 @@
  "main": "index.js",
```
"main" points to a non-published file.
You publish only dist/**, but "main" is "index.js". This breaks require/import of the package.
```diff
- "main": "index.js",
+ "main": "dist/index.js",
```

Alternatively, if this package is CLI-only, consider removing "main" entirely.
🤖 Prompt for AI Agents
In package.json around line 5, the "main" field points to index.js which is not
published; update "main" to point to the built entry under dist (e.g.,
"dist/index.js") or remove the "main" field entirely if this package is
CLI-only, and ensure your package's "files" or publish configuration includes
the dist/ directory so the referenced file is actually published.
```diff
 "scripts": {
   "build": "npm run build:packages && tsup",
   "dev": "npm run build:packages && npm link && (npm run dev:packages & tsup --watch --onSuccess 'echo Build complete && npm link')",
   "dev:packages": "(cd packages/tm-core && npm run dev) & (cd apps/cli && npm run dev) & wait",
   "dev:core": "cd packages/tm-core && npm run dev",
   "dev:cli": "cd apps/cli && npm run dev",
   "build:packages": "npm run build:core && npm run build:cli",
   "build:core": "cd packages/tm-core && npm run build",
   "build:cli": "cd apps/cli && npm run build",
   "test": "node --experimental-vm-modules node_modules/.bin/jest",
   "test:fails": "node --experimental-vm-modules node_modules/.bin/jest --onlyFailures",
   "test:watch": "node --experimental-vm-modules node_modules/.bin/jest --watch",
   "test:coverage": "node --experimental-vm-modules node_modules/.bin/jest --coverage",
   "test:e2e": "./tests/e2e/run_e2e.sh",
   "test:e2e-report": "./tests/e2e/run_e2e.sh --analyze-log",
-  "prepare": "chmod +x bin/task-master.js mcp-server/server.js",
   "postpack": "chmod +x dist/task-master.js dist/mcp-server.js",
   "changeset": "changeset",
   "release": "changeset publish",
-  "inspector": "npx @modelcontextprotocol/inspector node mcp-server/server.js",
-  "mcp-server": "node mcp-server/server.js",
+  "inspector": "npx @modelcontextprotocol/inspector node dist/mcp-server.js",
+  "mcp-server": "node dist/mcp-server.js",
   "format-check": "biome format .",
   "format": "biome format . --write"
 },
```
🛠️ Refactor suggestion
Add standardized test scripts (unit/integration/ci) at root.
Conform to the repo’s test script schema.
```diff
 "scripts": {
-  "build": "npm run build:packages && tsup",
+  "build": "npm run build:packages && tsup",
   "dev": "npm run build:packages && npm link && (npm run dev:packages & tsup --watch --onSuccess 'echo Build complete && npm link')",
   "dev:packages": "(cd packages/tm-core && npm run dev) & (cd apps/cli && npm run dev) & wait",
   "dev:core": "cd packages/tm-core && npm run dev",
   "dev:cli": "cd apps/cli && npm run dev",
   "build:packages": "npm run build:core && npm run build:cli",
   "build:core": "cd packages/tm-core && npm run build",
   "build:cli": "cd apps/cli && npm run build",
   "test": "node --experimental-vm-modules node_modules/.bin/jest",
   "test:fails": "node --experimental-vm-modules node_modules/.bin/jest --onlyFailures",
   "test:watch": "node --experimental-vm-modules node_modules/.bin/jest --watch",
   "test:coverage": "node --experimental-vm-modules node_modules/.bin/jest --coverage",
+  "test:unit": "npm run test -- --testPathPattern=tests/unit",
+  "test:integration": "npm run test -- --testPathPattern=tests/integration",
   "test:e2e": "./tests/e2e/run_e2e.sh",
   "test:e2e-report": "./tests/e2e/run_e2e.sh --analyze-log",
   "postpack": "chmod +x dist/task-master.js dist/mcp-server.js",
   "changeset": "changeset",
   "release": "changeset publish",
   "inspector": "npx @modelcontextprotocol/inspector node dist/mcp-server.js",
   "mcp-server": "node dist/mcp-server.js",
   "format-check": "biome format .",
-  "format": "biome format . --write"
+  "format": "biome format . --write",
+  "test:ci": "npm run test:coverage"
 },
```
🤖 Prompt for AI Agents
package.json lines 13-35: the root package.json lacks the standardized test
scripts required by the repo schema (unit, integration, ci); add new scripts
named "test:unit", "test:integration" and "test:ci" that run the appropriate
jest/mocha commands with the repo’s test patterns and environment flags (e.g.,
node --experimental-vm-modules node_modules/.bin/jest) or call existing
package-specific test scripts, and ensure "test" delegates to "test:ci" or
"test:unit" per schema; also update any composite scripts (like
test:watch/test:coverage) to reference the unit/integration scripts if needed so
the root conforms to the repository test script naming and behavior conventions.
```text
  "version": "1.0.0",
  "description": "Core library for Task Master - TypeScript task management system",
  "type": "module",
  "types": "./dist/index.d.ts",
```
Types resolution and CJS exports mismatch.
- Align with repo preference to point "types" to src .ts for internal packages.
- "exports.*.require" currently points to ESM files; CommonJS consumers will fail unless you emit CJS builds.
```diff
- "types": "./dist/index.d.ts",
+ "types": "./src/index.ts",
```

Option A (dual build, preferred):

- Ensure tsup emits both formats (tsup --format esm,cjs or tsup.config.ts).
- Point require entries to .cjs outputs:

```diff
- ".": { "types": "./src/index.ts", "import": "./dist/index.js", "require": "./dist/index.js" },
+ ".": { "types": "./src/index.ts", "import": "./dist/index.js", "require": "./dist/index.cjs" },
- "./auth": { "types": "./src/auth/index.ts", "import": "./dist/auth/index.js", "require": "./dist/auth/index.js" },
+ "./auth": { "types": "./src/auth/index.ts", "import": "./dist/auth/index.js", "require": "./dist/auth/index.cjs" },
- "./storage": { "types": "./src/storage/index.ts", "import": "./dist/storage/index.js", "require": "./dist/storage/index.js" },
+ "./storage": { "types": "./src/storage/index.ts", "import": "./dist/storage/index.js", "require": "./dist/storage/index.cjs" },
```

...repeat for each subpath...

Option B (ESM-only):

- Drop all "require" keys from exports.

Also applies to: 9-13, 14-64
🤖 Prompt for AI Agents
In packages/tm-core/package.json around line 6 (and also affecting lines 9-13
and 14-64), the "types" field points to the compiled .d.ts and the
"exports.*.require" entries point to ESM files causing CJS consumers to fail;
update "types" to point at the TypeScript source (e.g., src/index.ts) for
internal packages, and implement the preferred Option A by configuring tsup to
emit both esm and cjs outputs (tsup --format esm,cjs or via tsup.config.ts) and
then change the require export entries to reference the generated .cjs files;
alternatively, if you choose ESM-only, remove all "require" keys from exports
(ensure package.json entries and build outputs remain consistent with the chosen
option).
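For Option A, the dual-format build could look roughly like this in packages/tm-core/tsup.config.ts; the entry list and options are illustrative, not the package's actual configuration:

```typescript
import { defineConfig } from 'tsup';

export default defineConfig({
  // Illustrative entries; the real package exposes more subpaths
  entry: ['src/index.ts', 'src/auth/index.ts', 'src/storage/index.ts'],
  format: ['esm', 'cjs'], // emits dist/*.js (ESM) and dist/*.cjs (CJS)
  dts: true,              // generate .d.ts declarations alongside the JS
  sourcemap: true,
  clean: true
});
```

With both formats emitted, the "require" export conditions can point at the .cjs outputs while "import" keeps the .js ESM files.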
```text
  "scripts": {
    "build": "tsup",
    "dev": "tsup --watch",
    "test": "vitest run",
    "test:watch": "vitest",
    "test:coverage": "vitest run --coverage",
    "lint": "biome check --write",
    "lint:check": "biome check",
    "lint:fix": "biome check --fix --unsafe",
    "format": "biome format --write",
    "format:check": "biome format",
    "typecheck": "tsc --noEmit"
  },
```
🛠️ Refactor suggestion
Add the standardized test scripts.
Per repository guidelines, include unit/integration/e2e/CI variants.
"scripts": {
- "build": "tsup",
+ "build": "tsup",
"dev": "tsup --watch",
"test": "vitest run",
"test:watch": "vitest",
"test:coverage": "vitest run --coverage",
+ "test:unit": "vitest run",
+ "test:integration": "vitest run --dir tests/integration",
+ "test:e2e": "vitest run --dir tests/e2e",
+ "test:ci": "vitest run --coverage",
"lint": "biome check --write",
"lint:check": "biome check",
"lint:fix": "biome check --fix --unsafe",
"format": "biome format --write",
"format:check": "biome format",
"typecheck": "tsc --noEmit"
},

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "scripts": { | |
| "build": "tsup", | |
| "dev": "tsup --watch", | |
| "test": "vitest run", | |
| "test:watch": "vitest", | |
| "test:coverage": "vitest run --coverage", | |
| "lint": "biome check --write", | |
| "lint:check": "biome check", | |
| "lint:fix": "biome check --fix --unsafe", | |
| "format": "biome format --write", | |
| "format:check": "biome format", | |
| "typecheck": "tsc --noEmit" | |
| }, | |
| "scripts": { | |
| "build": "tsup", | |
| "dev": "tsup --watch", | |
| "test": "vitest run", | |
| "test:watch": "vitest", | |
| "test:coverage": "vitest run --coverage", | |
| "test:unit": "vitest run", | |
| "test:integration": "vitest run --dir tests/integration", | |
| "test:e2e": "vitest run --dir tests/e2e", | |
| "test:ci": "vitest run --coverage", | |
| "lint": "biome check --write", | |
| "lint:check": "biome check", | |
| "lint:fix": "biome check --fix --unsafe", | |
| "format": "biome format --write", | |
| "format:check": "biome format", | |
| "typecheck": "tsc --noEmit" | |
| }, |
🤖 Prompt for AI Agents
In packages/tm-core/package.json around lines 66 to 78, the scripts block needs
the standardized test script variants required by repo guidelines; add explicit
"test:unit", "test:integration", "test:e2e" and "test:ci" script entries (and
keep existing top-level "test" and "test:watch" as aliases) that map to the
vitest commands with appropriate flags/environment variables (e.g. run vitest
with a unit-specific config or grep/tag filters for unit, similar for
integration and e2e, and a CI variant that runs tests headlessly with coverage
and fail-on-first or --run). Ensure names match repo standard and that "test:ci"
is used by CI pipelines.
```ts
it('should calculate exponential backoff delays', () => {
	const provider = new MockProvider({ apiKey: 'test-key' });

	// Access protected method through type assertion
	const calculateDelay = (provider as any).calculateBackoffDelay.bind(
		provider
	);

	const delay1 = calculateDelay(1);
	const delay2 = calculateDelay(2);
	const delay3 = calculateDelay(3);

	// Check exponential growth (with jitter, so use ranges)
	expect(delay1).toBeGreaterThanOrEqual(900);
	expect(delay1).toBeLessThanOrEqual(1100);

	expect(delay2).toBeGreaterThanOrEqual(1800);
	expect(delay2).toBeLessThanOrEqual(2200);

	expect(delay3).toBeGreaterThanOrEqual(3600);
	expect(delay3).toBeLessThanOrEqual(4400);
});
```
🧹 Nitpick (assertive)
Stabilize backoff delay test by stubbing Math.random.
Jitter makes ranges flaky on slow CI. Stub to a fixed value.
-const delay1 = calculateDelay(1);
+const randSpy = vi.spyOn(Math, 'random').mockReturnValue(0.0);
+const delay1 = calculateDelay(1);
const delay2 = calculateDelay(2);
const delay3 = calculateDelay(3);
+randSpy.mockRestore();

📝 Committable suggestion
```ts
it('should calculate exponential backoff delays', () => {
	const provider = new MockProvider({ apiKey: 'test-key' });

	// Access protected method through type assertion
	const calculateDelay = (provider as any).calculateBackoffDelay.bind(
		provider
	);

	// Stabilize jitter by stubbing Math.random to a fixed value
	const randSpy = vi.spyOn(Math, 'random').mockReturnValue(0.0);
	const delay1 = calculateDelay(1);
	const delay2 = calculateDelay(2);
	const delay3 = calculateDelay(3);
	randSpy.mockRestore();

	// Check exponential growth (with jitter, so use ranges)
	expect(delay1).toBeGreaterThanOrEqual(900);
	expect(delay1).toBeLessThanOrEqual(1100);

	expect(delay2).toBeGreaterThanOrEqual(1800);
	expect(delay2).toBeLessThanOrEqual(2200);

	expect(delay3).toBeGreaterThanOrEqual(3600);
	expect(delay3).toBeLessThanOrEqual(4400);
});
```
🤖 Prompt for AI Agents
In packages/tm-core/tests/unit/base-provider.test.ts around lines 127 to 148,
the backoff delay assertions are flaky due to jitter from Math.random; stub
Math.random to a fixed value before calling the protected calculateBackoffDelay
(e.g., use jest.spyOn(Math, 'random').mockReturnValue(0.5) or similar) so delays
are deterministic, run the three delay calculations, then restore the original
Math.random (mockRestore) after the assertions (or use try/finally) to avoid
affecting other tests.
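Outside vitest, the same determinism can be shown by injecting the random source instead of stubbing the global. This is a hedged sketch: the formula (base 1000 ms, doubling per attempt, ±10% jitter) is inferred from the asserted 900–1100 / 1800–2200 / 3600–4400 ranges, not taken from the real `calculateBackoffDelay`.

```javascript
// Hypothetical backoff mirroring the tested ranges: base 1000ms,
// doubling per attempt, jitter of -10%..+10% driven by `random`.
function calculateBackoffDelay(attempt, random = Math.random) {
	const base = 1000 * 2 ** (attempt - 1);
	const jitter = (random() - 0.5) * 0.2; // maps [0,1) to [-0.1, +0.1)
	return base * (1 + jitter);
}

// Injecting a fixed source makes assertions exact instead of ranged.
const noJitter = () => 0.5;
console.log(calculateBackoffDelay(1, noJitter)); // 1000
console.log(calculateBackoffDelay(3, noJitter)); // 4000
```

The same idea applies inside the provider: accept an optional random source in the constructor and the test never has to touch globals at all.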
```ts
describe('template method pattern', () => {
	it('should follow the template method flow', async () => {
		const provider = new MockProvider({
			apiKey: 'test-key',
			responseDelay: 50
		});

		const startTime = Date.now();
		const response = await provider.generateCompletion('Test prompt', {
			temperature: 0.5,
			maxTokens: 100
		});
		const endTime = Date.now();

		// Verify the response was processed through the template
		expect(response.content).toBeDefined();
		expect(response.duration).toBeGreaterThanOrEqual(50);
		expect(response.duration).toBeLessThanOrEqual(endTime - startTime + 10);
		expect(response.timestamp).toBeDefined();
		expect(response.provider).toBe('mock');
	});
});
```
🧹 Nitpick (assertive)
Reduce timing flakiness with fake timers.
Use vi.useFakeTimers and advance timers by responseDelay to assert duration deterministically.
🤖 Prompt for AI Agents
In packages/tm-core/tests/unit/base-provider.test.ts around lines 243-264, the
test is flaky due to real timers; switch to fake timers by calling
vi.useFakeTimers() before invoking provider.generateCompletion, start the async
call, then advance timers deterministically with
vi.advanceTimersByTime(responseDelay) (or responseDelay + 1 to be safe), await
the pending promise (e.g., await Promise.resolve() or use
vi.runAllTimersAsync()), then restore timers with
vi.useRealTimers()/vi.restoreAllMocks(); update assertions to expect
response.duration equals the deterministic delay instead of relying on Date.now
ranges.
```ts
describe('placeholder storage', () => {
	it('should perform basic storage operations', async () => {
		const storage = new PlaceholderStorage();
		const testPath = 'test/path';
		const testData = 'test data';

		// Initially should not exist
		expect(await storage.exists(testPath)).toBe(false);
		expect(await storage.read(testPath)).toBe(null);

		// Write and verify
		await storage.write(testPath, testData);
		expect(await storage.exists(testPath)).toBe(true);
		expect(await storage.read(testPath)).toBe(testData);

		// Delete and verify
		await storage.delete(testPath);
		expect(await storage.exists(testPath)).toBe(false);
	});
```
🧹 Nitpick (assertive)
Consider adding negative-path tests for storage.
E.g., read after delete throws/returns null consistently; write empty data behavior.
I can add targeted cases if desired.
🤖 Prompt for AI Agents
In packages/tm-core/tests/unit/smoke.test.ts around lines 53-71, add
negative-path tests for PlaceholderStorage: add one test that after deleting a
key a subsequent read returns null (or the implementation's canonical “not
found” value) to assert consistent post-delete behavior, and add a second test
verifying behavior when writing empty input — write an empty string and assert
read returns an empty string, and attempt to write undefined/null and assert it
throws or is rejected per the storage API; keep assertions aligned to the
current PlaceholderStorage contract.
```ts
describe('placeholder parser', () => {
	it('should parse simple task lists', async () => {
		const parser = new PlaceholderParser();
		const content = `
- Task 1
- Task 2
- Task 3
`;

		const isValid = await parser.validate(content);
		expect(isValid).toBe(true);

		const tasks = await parser.parse(content);
		expect(tasks).toHaveLength(3);
		expect(tasks[0]?.title).toBe('Task 1');
		expect(tasks[1]?.title).toBe('Task 2');
		expect(tasks[2]?.title).toBe('Task 3');

		tasks.forEach((task) => {
			expect(task.status).toBe('pending');
			expect(task.priority).toBe('medium');
		});
	});
```
🧹 Nitpick (assertive)
Parser tests: add invalid-content validation case.
Assert validate('') === false and parse('') returns [].
🤖 Prompt for AI Agents
In packages/tm-core/tests/unit/smoke.test.ts around lines 74 to 96, add
assertions to cover invalid-content behavior: call parser.validate('') and
assert it returns false, and call parser.parse('') and assert it returns an
empty array. Update the test case (or add a new one) to instantiate
PlaceholderParser, then await parser.validate('') === false and await
parser.parse('') toEqual([]), keeping existing setup/teardown intact and
ensuring promises are awaited.
```js
import { log, readJSON } from './utils.js';
// Import new commands from @tm/cli
import { ListTasksCommand, AuthCommand } from '@tm/cli';
```
Build break: static import of '@tm/cli' causes module-not-found in CI.
Don’t hard-crash the CLI when the optional CLI package isn’t present. Use guarded dynamic import and degrade gracefully.
Apply:
-import { log, readJSON } from './utils.js';
-// Import new commands from @tm/cli
-import { ListTasksCommand, AuthCommand } from '@tm/cli';
+import { log, readJSON } from './utils.js';
+// External CLI commands are optional; they will be loaded dynamically in registerCommands()

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In scripts/modules/commands.js around lines 17 to 20, the static import "import
{ ListTasksCommand, AuthCommand } from '@tm/cli';" causes CI failures when
'@tm/cli' is optional; replace it with a guarded dynamic import using try/catch
(or top-level await if supported) so the module is only loaded when present:
attempt to await import('@tm/cli') and, on success, extract the commands,
otherwise fall back to safe no-op placeholders or undefined exports and log a
warning; ensure downstream code that registers/uses these commands checks for
their presence before calling them so the CLI degrades gracefully instead of
hard-crashing.
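A minimal sketch of the guarded dynamic import the prompt describes; the helper name `loadOptionalCliCommands` and the null-placeholder fallback shape are illustrative assumptions, not code from the PR.

```javascript
// Hypothetical loader: resolves '@tm/cli' lazily and degrades to null
// placeholders (instead of crashing at import time) when the package is absent.
async function loadOptionalCliCommands() {
	try {
		const mod = await import('@tm/cli');
		return { ListTasksCommand: mod.ListTasksCommand, AuthCommand: mod.AuthCommand };
	} catch {
		console.warn('warning: @tm/cli not installed; external commands disabled');
		return { ListTasksCommand: null, AuthCommand: null };
	}
}

// Callers must then check for presence before registering, e.g.:
// const { ListTasksCommand } = await loadOptionalCliCommands();
// if (ListTasksCommand) program.addCommand(new ListTasksCommand());
```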
```json
{
	"name": "@tm/cli",
	"version": "1.0.0",
	"description": "Task Master CLI - Command line interface for task management",
	"type": "module",
	"main": "./dist/index.js",
	"types": "./dist/index.d.ts",
	"exports": {
		".": {
			"types": "./src/index.ts",
			"import": "./dist/index.js",
			"require": "./dist/index.js"
		}
	},
	"files": ["dist", "README.md"],
	"scripts": {
		"build": "tsup",
		"dev": "tsup --watch",
		"typecheck": "tsc --noEmit",
		"lint": "biome check src",
		"format": "biome format --write src",
		"test": "vitest run",
		"test:watch": "vitest"
	},
	"dependencies": {
		"@tm/core": "*",
		"boxen": "^7.1.1",
		"chalk": "^5.3.0",
		"cli-table3": "^0.6.5",
		"commander": "^12.1.0",
		"inquirer": "^9.2.10",
		"open": "^10.2.0",
		"ora": "^8.1.0"
	},
	"devDependencies": {
		"@biomejs/biome": "^1.9.4",
		"@types/inquirer": "^9.0.3",
		"@types/node": "^22.10.5",
		"tsup": "^8.3.0",
		"tsx": "^4.20.4",
		"typescript": "^5.7.3",
		"vitest": "^2.1.8"
	},
	"engines": {
		"node": ">=18.0.0"
	},
	"keywords": ["task-master", "cli", "task-management", "productivity"],
	"author": "",
	"license": "MIT"
}
```
🧹 Nitpick (assertive)
Consider adding a bin entry (task-master / tm) for eventual direct CLI execution.
If the intent is to run the CLI directly from this package, add a "bin" mapping; otherwise ignore.
🤖 Prompt for AI Agents
In apps/cli/package.json around lines 1 to 50, there is no "bin" entry for
installing or invoking the CLI directly; add a "bin" field mapping the desired
command names (e.g., "task-master" and/or "tm") to the package entrypoint
(./dist/index.js) so npm/yarn will create executable shims on install, ensure
the built dist/index.js includes a proper CLI shebang (#!/usr/bin/env node) and
is included in "files" so the executable is published, and update build steps if
necessary to preserve the shebang and make the output executable.
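If direct execution is wanted, the addition could look like this sketch (the `task-master`/`tm` names and the `./dist/index.js` target are assumptions based on the suggestion above, not existing fields):

```json
{
	"bin": {
		"task-master": "./dist/index.js",
		"tm": "./dist/index.js"
	}
}
```

The built `dist/index.js` would also need a `#!/usr/bin/env node` shebang preserved by the build (tsup's `banner` option can inject one), and `dist` must stay in `"files"` so the executable ships with the package.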
| "types": "./dist/index.d.ts", | ||
| "exports": { | ||
| ".": { | ||
| "types": "./src/index.ts", | ||
| "import": "./dist/index.js", | ||
| "require": "./dist/index.js" | ||
| } |
🧹 Nitpick (assertive)
Unify types entry with monorepo DX preference (point to src).
Per prior preference, internal packages should point "types" to src.
-"types": "./dist/index.d.ts",
+"types": "./src/index.ts",
"exports": {
".": {
- "types": "./src/index.ts",
+ "types": "./src/index.ts",
"import": "./dist/index.js",
"require": "./dist/index.js"
}
}

📝 Committable suggestion
| "types": "./dist/index.d.ts", | |
| "exports": { | |
| ".": { | |
| "types": "./src/index.ts", | |
| "import": "./dist/index.js", | |
| "require": "./dist/index.js" | |
| } | |
| "types": "./src/index.ts", | |
| "exports": { | |
| ".": { | |
| "types": "./src/index.ts", | |
| "import": "./dist/index.js", | |
| "require": "./dist/index.js" | |
| } |
🤖 Prompt for AI Agents
In apps/cli/package.json around lines 7 to 13, the top-level "types" field
currently points to "./dist/index.d.ts" but the monorepo convention is to point
types at source; update the "types" entry to "./src/index.ts" so it matches the
package exports and monorepo DX preference, ensuring the path is correct and
consistent with the existing exports block.
| "exports": { | ||
| ".": { | ||
| "types": "./src/index.ts", | ||
| "import": "./dist/index.js", | ||
| "require": "./dist/index.js" | ||
| } |
💡 Verification agent
🧩 Analysis chain
Verify CJS export actually exists if you expose "require".
Package is ESM ("type": "module"). If tsup does not emit CJS, the "require" condition pointing to an ESM file will break require() consumers.
If CJS isn't emitted, drop the "require" condition or emit dual formats.
Also applies to: 39-43
🏁 Script executed:
```shell
#!/bin/bash
# Check tsup config and built formats
fd tsup.config -t file | xargs -I{} cat {}
rg -nP '"format"\s*:\s*' apps/cli/tsup.config.* || true
rg -nP 'module\.exports|cjs' apps/cli/dist || true
```

Length of output: 3269
Fix CJS export path or drop require
In apps/cli/package.json exports you point "require": "./dist/index.js", but the tsup config under apps/cli only outputs ESM—no CJS file exists. Either update apps/cli/tsup.config.ts to include format: ['cjs','esm'] and change the "require" path to "./dist/index.cjs", or remove the require condition (and do the same for any other subpath exports).
🤖 Prompt for AI Agents
In apps/cli/package.json around lines 8-13, the "require" export points to
./dist/index.js but the tsup build only emits ESM, so fix by either (A) updating
apps/cli/tsup.config.ts to build both formats (format: ['cjs','esm']) and change
the "require" export to point to the generated CJS file (e.g. ./dist/index.cjs),
or (B) remove the "require" condition from exports (and from any other subpath
exports) so only ESM remains; apply the chosen change consistently across
package.json and tsup config.
```md
## Overview
We're using Commander.js's native class pattern instead of custom abstractions. This is cleaner, more maintainable, and uses the framework as designed.
```
🧹 Nitpick (assertive)
Add blank lines after headings and around code fences.
Resolves MD022/MD031 violations called out by markdownlint.
-## Overview
-We're using Commander.js's native class pattern instead of custom abstractions. This is cleaner, more maintainable, and uses the framework as designed.
+## Overview
+
+We're using Commander.js's native class pattern instead of custom abstractions. This is cleaner, more maintainable, and uses the framework as designed.

Apply the same pattern for the sections “In New Code”, “In Existing Scripts”, and “Programmatic Usage” by ensuring a blank line after the heading and before/after the fenced blocks.
Also applies to: 98-104, 105-111, 112-117
🧰 Tools
🪛 markdownlint-cli2 (0.17.2)
3-3: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
🤖 Prompt for AI Agents
In docs/CLI-COMMANDER-PATTERN.md around lines 3-4 (and also apply same fix to
ranges 98-104, 105-111, 112-117), headings and fenced code blocks are missing
blank lines which triggers MD022/MD031; add a single blank line immediately
after each heading and ensure there is a blank line before and after each fenced
code block in the “In New Code”, “In Existing Scripts”, and “Programmatic Usage”
sections so headings are followed by an empty line and fenced blocks are
separated by blank lines above and below.
```text
@tm/core (Business Logic)          @tm/cli (Presentation)
┌─────────────────────┐            ┌──────────────────────────┐
│ TaskMasterCore      │◄───────────│ ListTasksCommand         │
│ - getTaskList()     │            │ extends Commander.Command│
│ - getTask()         │            │ - display logic only     │
│ - getNextTask()     │            │ - formatting             │
└─────────────────────┘            └──────────────────────────┘
          ▲                                     ▲
          │                                     │
          └──────── Gets Data ──────────────────┘
                        Displays Data
```
🧹 Nitpick (assertive)
Fix markdownlint: add language to fenced diagram block.
Label the ASCII diagram as text to satisfy MD040.
-```
+```text
@tm/core (Business Logic) @tm/cli (Presentation)
...
🧰 Tools
🪛 markdownlint-cli2 (0.17.2)
8-8: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In docs/CLI-COMMANDER-PATTERN.md around lines 8 to 20, the fenced ASCII diagram
block is missing a language tag which triggers markdownlint MD040; update the
opening fence from ``` to ```text so the block is treated as plain text,
preserving the diagram content and spacing, then save.
```ts
private configManager: ConfigManager;
private taskService: TaskService;

/**
 * Create and initialize a new TaskMasterCore instance
 * This is the ONLY way to create a TaskMasterCore
 *
 * @param options - Configuration options for TaskMasterCore
 * @returns Fully initialized TaskMasterCore instance
 */
static async create(options: TaskMasterCoreOptions): Promise<TaskMasterCore> {
	const instance = new TaskMasterCore();
	await instance.initialize(options);
	return instance;
}

/**
 * Private constructor - use TaskMasterCore.create() instead
 * This ensures the TaskMasterCore is always properly initialized
 */
private constructor() {
	// Services will be initialized in the initialize() method
	this.configManager = null as any;
	this.taskService = null as any;
}
```
🧹 Nitpick (assertive)
Avoid null as any; use definite assignment.
Improves type safety without runtime cost.
Apply:
export class TaskMasterCore {
- private configManager: ConfigManager;
- private taskService: TaskService;
+ private configManager!: ConfigManager;
+ private taskService!: TaskService;
...
- private constructor() {
- // Services will be initialized in the initialize() method
- this.configManager = null as any;
- this.taskService = null as any;
- }
+ private constructor() {}

📝 Committable suggestion
```ts
export class TaskMasterCore {
	private configManager!: ConfigManager;
	private taskService!: TaskService;

	/**
	 * Create and initialize a new TaskMasterCore instance
	 * This is the ONLY way to create a TaskMasterCore
	 *
	 * @param options - Configuration options for TaskMasterCore
	 * @returns Fully initialized TaskMasterCore instance
	 */
	static async create(options: TaskMasterCoreOptions): Promise<TaskMasterCore> {
		const instance = new TaskMasterCore();
		await instance.initialize(options);
		return instance;
	}

	/**
	 * Private constructor - use TaskMasterCore.create() instead
	 * This ensures the TaskMasterCore is always properly initialized
	 */
	private constructor() {}
}
```
🤖 Prompt for AI Agents
packages/tm-core/src/task-master-core.ts around lines 34 to 58: the class
currently assigns properties using `null as any` in the private constructor
which defeats TypeScript safety; instead remove those ad-hoc null assignments
and declare the properties with definite assignment assertions (e.g., `private
configManager!: ConfigManager; private taskService!: TaskService;`) so the
compiler knows they will be initialized in initialize(); leave the constructor
empty (or remove it if not needed) and ensure initialize() sets both properties
before use.
```ts
async close(): Promise<void> {
	// TaskService handles storage cleanup internally
}
```
🧹 Nitpick (assertive)
close() is a no-op; ensure resources are released.
If storage allocates resources (file handles, watchers, network clients), delegate to the underlying service.
Apply minimal guard now; follow up by adding a typed close() on TaskService:
async close(): Promise<void> {
- // TaskService handles storage cleanup internally
+ // TODO: add a typed close() on TaskService and remove the cast
+ await (this.taskService as unknown as { close?: () => Promise<void> })?.close?.();
}

I can add TaskService.close() and wire it to storage cleanup if you want.
📝 Committable suggestion
```ts
async close(): Promise<void> {
	// TODO: add a typed close() on TaskService and remove the cast
	await (this.taskService as unknown as { close?: () => Promise<void> })?.close?.();
}
```
🤖 Prompt for AI Agents
In packages/tm-core/src/task-master-core.ts around lines 176 to 178, the async
close() is currently a no-op; update it to delegate resource cleanup to the
underlying TaskService by checking for and calling a close() method if present.
Implement a minimal guard: if (this.taskService?.close) await
this.taskService.close(); wrap the await in a try/catch to log or rethrow errors
as appropriate and ensure the method is safe to call multiple times (idempotent)
— keep the change minimal now and leave adding a typed TaskService.close()
signature as a follow-up.
```ts
 * Provides functions to generate unique identifiers for tasks and subtasks
 */

import { randomBytes } from 'node:crypto';
```
🧹 Nitpick (assertive)
Align core module import style with repo preference (avoid node: scheme).
Per retrieved learnings from Crunchyman-ralph, switch to bare core import for consistency.
Apply:
-import { randomBytes } from 'node:crypto';
+import { randomBytes } from 'crypto';

(This also applies to tests importing node:fs below.)
📝 Committable suggestion
```ts
import { randomBytes } from 'crypto';
```
🤖 Prompt for AI Agents
In packages/tm-core/src/utils/id-generator.ts around line 6, the import uses the
node: scheme ("node:crypto"); change it to the bare core import ("crypto") to
match repository style and consistency. Update the import statement accordingly,
and scan related test files (e.g., those using "node:fs") to replace node:
scheme core imports with their bare counterparts as well.
```ts
const parentSubtasks = existingSubtasks.filter((id) =>
	id.startsWith(`${parentId}.`)
);

// Extract sequential numbers and find the highest
const sequentialNumbers = parentSubtasks
	.map((id) => {
		const parts = id.split('.');
		const lastPart = parts[parts.length - 1];
		return Number.parseInt(lastPart, 10);
	})
	.filter((num) => !Number.isNaN(num))
	.sort((a, b) => a - b);

// Determine the next sequential number
const nextSequential =
	sequentialNumbers.length > 0 ? Math.max(...sequentialNumbers) + 1 : 1;

return `${parentId}.${nextSequential}`;
```
Bug: next subtask index incorrectly considers nested subtasks.
The current scan includes grandchildren (e.g., parent.1.2), which can skip numbers for direct children (next becomes 3 instead of 2).
Apply this diff to only consider direct children one level deep:
- // Find existing subtasks for this parent
- const parentSubtasks = existingSubtasks.filter((id) =>
- id.startsWith(`${parentId}.`)
- );
-
- // Extract sequential numbers and find the highest
- const sequentialNumbers = parentSubtasks
- .map((id) => {
- const parts = id.split('.');
- const lastPart = parts[parts.length - 1];
- return Number.parseInt(lastPart, 10);
- })
- .filter((num) => !Number.isNaN(num))
- .sort((a, b) => a - b);
+ // Find direct children only: `${parentId}.<number>`
+ const directChildPattern = new RegExp(`^${escapeRegExp(parentId)}\\.\\d+$`);
+ const directChildren = existingSubtasks.filter((id) => directChildPattern.test(id));
+
+ // Extract direct child indices and find the highest
+ const sequentialNumbers = directChildren
+ .map((id) => Number.parseInt(id.substring(id.lastIndexOf('.') + 1), 10))
+ .filter((num) => !Number.isNaN(num));

Add this helper in the module (outside the shown range):
```ts
function escapeRegExp(str: string): string {
	return str.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}
```

🤖 Prompt for AI Agents
In packages/tm-core/src/utils/id-generator.ts around lines 41 to 59, the logic
that finds sibling sequential numbers incorrectly includes nested grandchildren
(e.g., parent.1.2) so the next child index can be skipped; restrict matching to
direct children only by either (a) filtering existingSubtasks to those that
start with the escaped parentId plus a single dot and contain exactly one
additional numeric segment (check split('.') length equals
parentId.split('.').length + 1 and that the last segment is a number), or (b)
build a regex using an escapeRegExp helper (add the provided escapeRegExp
function somewhere in the module) that matches
`^${escapeRegExp(parentId)}\\.[0-9]+$`, then extract and parse the trailing
number to compute the max + 1 and return `${parentId}.${nextSequential}`.
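Combining the helper and the filter, the fixed scan can be sketched as a runnable whole (the `nextSubtaskId` wrapper name is illustrative):

```javascript
function escapeRegExp(str) {
	return str.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

// Only `${parentId}.<number>` matches, so nested IDs like "1.1.2"
// can no longer inflate the next direct-child index.
function nextSubtaskId(parentId, existingSubtasks) {
	const directChild = new RegExp(`^${escapeRegExp(parentId)}\\.\\d+$`);
	const numbers = existingSubtasks
		.filter((id) => directChild.test(id))
		.map((id) => Number.parseInt(id.slice(id.lastIndexOf('.') + 1), 10))
		.filter((n) => !Number.isNaN(n));
	const next = numbers.length > 0 ? Math.max(...numbers) + 1 : 1;
	return `${parentId}.${next}`;
}

console.log(nextSubtaskId('1', ['1.1', '1.1.2'])); // "1.2" (was "1.3" before the fix)
```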
```ts
function generateRandomString(length: number): string {
	const chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';
	const bytes = randomBytes(length);
	let result = '';

	for (let i = 0; i < length; i++) {
		result += chars[bytes[i] % chars.length];
	}

	return result;
}
```
🧹 Nitpick (assertive)
Optional: remove modulo bias in random string generation.
Not security-critical here, but a small improvement using rejection sampling.
-function generateRandomString(length: number): string {
- const chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';
- const bytes = randomBytes(length);
- let result = '';
-
- for (let i = 0; i < length; i++) {
- result += chars[bytes[i] % chars.length];
- }
-
- return result;
-}
+function generateRandomString(length: number): string {
+ const chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';
+ const mask = 0xff - (0xff % chars.length);
+ let out = '';
+ while (out.length < length) {
+ const buf = randomBytes(length);
+ for (let i = 0; i < buf.length && out.length < length; i++) {
+ const v = buf[i];
+ if (v < mask) out += chars[v % chars.length]; // strict <: mask is a multiple of chars.length, so 0..mask-1 is uniform
+ }
+ }
+ return out;
+}

📝 Committable suggestion
```typescript
function generateRandomString(length: number): string {
	const chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';
	// Largest multiple of chars.length that fits in a byte; bytes at or
	// above this limit are rejected so no character is over-represented.
	const limit = 256 - (256 % chars.length);
	let out = '';
	while (out.length < length) {
		const buf = randomBytes(length);
		for (let i = 0; i < buf.length && out.length < length; i++) {
			const v = buf[i];
			if (v < limit) out += chars[v % chars.length];
		}
	}
	return out;
}
```
🤖 Prompt for AI Agents
In packages/tm-core/src/utils/id-generator.ts around lines 70 to 79, the current
generateRandomString uses bytes[i] % chars.length which introduces modulo bias;
replace that logic with rejection sampling: repeatedly read random bytes and
only accept a byte value if it is < (Math.floor(256 / chars.length) *
chars.length), then map it to an index via byte % chars.length; loop until you
have `length` characters (you can batch-read randomBytes and consume accepted
bytes), ensuring no modulo bias while keeping performance reasonable.
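As a sanity check of the rejection-sampling approach, here is a self-contained sketch; the function mirrors the suggestion above but is illustrative, not the package's code:

```typescript
import { randomBytes } from 'node:crypto';

// Rejection sampling: 256 is not a multiple of 36, so the raw byte values
// 252-255 would wrap around and favor the first four characters. Rejecting
// them keeps every character equally likely.
function generateRandomString(length: number): string {
	const chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';
	const limit = 256 - (256 % chars.length); // 252 for a 36-char alphabet
	let out = '';
	while (out.length < length) {
		const buf = randomBytes(length);
		for (let i = 0; i < buf.length && out.length < length; i++) {
			if (buf[i] < limit) out += chars[buf[i] % chars.length];
		}
	}
	return out;
}

const sample = generateRandomString(32);
```

Note the strict `<` comparison: accepting the limit value itself would re-introduce a small bias toward the first character.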
```typescript
// Callback on error
onError: () => {
	if (authSpinner) {
		authSpinner.fail('Authentication failed');
	}
}
```
🧹 Nitpick (assertive)
Surface actual error in onError callback
You’re ignoring the passed error. Log or pass it to ui to aid debugging.
```diff
-			onError: () => {
+			onError: (err) => {
 				if (authSpinner) {
 					authSpinner.fail('Authentication failed');
 				}
+				if (process.env.DEBUG) {
+					console.error(chalk.gray((err as Error)?.stack ?? String(err)));
+				}
 			}
```

📝 Committable suggestion
```typescript
// Callback on error
onError: (err) => {
	if (authSpinner) {
		authSpinner.fail('Authentication failed');
	}
	if (process.env.DEBUG) {
		console.error(chalk.gray((err as Error)?.stack ?? String(err)));
	}
}
```
🤖 Prompt for AI Agents
In apps/cli/src/commands/auth.command.ts around lines 400-405, the onError
callback currently ignores the error parameter; change its signature to accept
the error (e.g., onError: (err) => { ... }) and surface the actual error by
passing it to the UI/logger: if (authSpinner) authSpinner.fail(`Authentication
failed: ${err?.message ?? String(err)}`); and also call ui.error(err) if a UI
logger exists (fallback to console.error(err)) so the underlying error is
recorded; update any TypeScript types as needed to reflect the error parameter.
```typescript
// Validate status
if (options.status) {
	const statuses = options.status.split(',').map((s: string) => s.trim());

	for (const status of statuses) {
		if (status !== 'all' && !TASK_STATUSES.includes(status as TaskStatus)) {
			console.error(chalk.red(`Invalid status: ${status}`));
			console.error(
				chalk.gray(`Valid statuses: ${TASK_STATUSES.join(', ')}`)
			);
			return false;
		}
	}
}
```
🛠️ Refactor suggestion
Reject mixing 'all' with other statuses; normalize case.
Currently, "all,done" passes validation and later casts "all" to TaskStatus, which is invalid.
```diff
-		const statuses = options.status.split(',').map((s: string) => s.trim());
+		const statuses = options.status
+			.split(',')
+			.map((s: string) => s.trim().toLowerCase());
+
+		if (statuses.includes('all') && statuses.length > 1) {
+			console.error(
+				chalk.red("Invalid status: 'all' cannot be combined with other statuses")
+			);
+			return false;
+		}
```

📝 Committable suggestion
```typescript
// Validate status
if (options.status) {
	const statuses = options.status
		.split(',')
		.map((s: string) => s.trim().toLowerCase());

	if (statuses.includes('all') && statuses.length > 1) {
		console.error(
			chalk.red("Invalid status: 'all' cannot be combined with other statuses")
		);
		return false;
	}

	for (const status of statuses) {
		if (status !== 'all' && !TASK_STATUSES.includes(status as TaskStatus)) {
			console.error(chalk.red(`Invalid status: ${status}`));
			console.error(
				chalk.gray(`Valid statuses: ${TASK_STATUSES.join(', ')}`)
			);
			return false;
		}
	}
}
```
🤖 Prompt for AI Agents
In apps/cli/src/commands/list.command.ts around lines 121 to 135, the status
validation currently allows mixing the special value "all" with other statuses
and performs case-sensitive checks; change the logic to first normalize each
status to lower-case, reject the combination where "all" appears with any other
status (print a clear error and return false), and then validate the remaining
statuses against TASK_STATUSES in a case-insensitive way (or normalize
TASK_STATUSES for comparison) so invalid statuses are correctly detected; ensure
downstream code never attempts to cast the literal "all" to TaskStatus.
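A standalone version of that validation logic makes the two rules easy to unit-test; `TASK_STATUSES` here is a stand-in for the real constant:

```typescript
const TASK_STATUSES = [
	'pending', 'in-progress', 'done', 'deferred', 'blocked', 'review', 'cancelled'
] as const;
type TaskStatus = (typeof TASK_STATUSES)[number];

// Normalize case, reject 'all' mixed with other statuses, and reject
// anything outside the known status list.
function parseStatuses(input: string): { statuses: string[] } | { error: string } {
	const statuses = input.split(',').map((s) => s.trim().toLowerCase());
	if (statuses.includes('all') && statuses.length > 1) {
		return { error: "'all' cannot be combined with other statuses" };
	}
	for (const status of statuses) {
		if (status !== 'all' && !TASK_STATUSES.includes(status as TaskStatus)) {
			return { error: `Invalid status: ${status}` };
		}
	}
	return { statuses };
}
```

Returning a tagged result instead of printing keeps the rule reusable from both the CLI validator and any programmatic caller.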
```typescript
// Build filter
const filter =
	options.status && options.status !== 'all'
		? {
				status: options.status
					.split(',')
					.map((s: string) => s.trim() as TaskStatus)
			}
		: undefined;

// Call tm-core
const result = await this.tmCore.getTaskList({
	tag: options.tag,
	filter,
	includeSubtasks: options.withSubtasks
});

return result as ListTasksResult;
}
```
🛠️ Refactor suggestion
Build filter defensively; strip 'all' and normalize.
If validation is bypassed (programmatic use), "all" can leak into filter as a TaskStatus. Normalize and drop "all".
```diff
-	const filter =
-		options.status && options.status !== 'all'
-			? {
-					status: options.status
-						.split(',')
-						.map((s: string) => s.trim() as TaskStatus)
-				}
-			: undefined;
+	let filter;
+	if (options.status) {
+		const statuses = options.status
+			.split(',')
+			.map((s: string) => s.trim().toLowerCase())
+			.filter((s: string) => s !== 'all') as TaskStatus[];
+		filter = statuses.length ? { status: statuses } : undefined;
+	}
```

📝 Committable suggestion
```typescript
// Build filter
let filter: { status: TaskStatus[] } | undefined;
if (options.status) {
	const statuses = options.status
		.split(',')
		.map((s: string) => s.trim().toLowerCase())
		.filter((s: string) => s !== 'all') as TaskStatus[];
	filter = statuses.length ? { status: statuses } : undefined;
}

// Call tm-core
const result = await this.tmCore.getTaskList({
	tag: options.tag,
	filter,
	includeSubtasks: options.withSubtasks
});

return result as ListTasksResult;
}
```
🤖 Prompt for AI Agents
In apps/cli/src/commands/list.command.ts around lines 158–176, the filter
builder can leak the string "all" into the TaskStatus array when validation is
bypassed; update the logic to split the status string, trim and normalize items,
remove any entries equal to "all" (case-insensitive) and any empty strings, and
only set filter.status when the cleaned array has at least one element
(otherwise set filter to undefined) so you never pass "all" as a TaskStatus.
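The defensive variant reduces to a pure helper; a sketch under the same assumptions (the plain `string[]` stands in for `TaskStatus[]`):

```typescript
// Drop 'all' and empty entries; return undefined when nothing remains so
// the caller applies no status filter at all.
function buildStatusFilter(status?: string): { status: string[] } | undefined {
	if (!status) return undefined;
	const statuses = status
		.split(',')
		.map((s) => s.trim().toLowerCase())
		.filter((s) => s !== 'all' && s.length > 0);
	return statuses.length ? { status: statuses } : undefined;
}
```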
```typescript
private displayCompact(tasks: Task[], withSubtasks?: boolean): void {
	tasks.forEach((task) => {
		const icon = STATUS_ICONS[task.status];
		console.log(`${chalk.cyan(task.id)} ${icon} ${task.title}`);

		if (withSubtasks && task.subtasks?.length) {
			task.subtasks.forEach((subtask) => {
				const subIcon = STATUS_ICONS[subtask.status];
				console.log(
					`  ${chalk.gray(`${task.id}.${subtask.id}`)} ${subIcon} ${chalk.gray(subtask.title)}`
				);
			});
		}
	});
}
```
🧹 Nitpick (assertive)
Gracefully handle unknown statuses in compact view.
Avoid printing "undefined" when an unexpected status slips through.
```diff
-		const icon = STATUS_ICONS[task.status];
+		const icon = STATUS_ICONS[task.status] ?? '·';
@@
-				const subIcon = STATUS_ICONS[subtask.status];
+				const subIcon = STATUS_ICONS[subtask.status] ?? '·';
```

📝 Committable suggestion
```typescript
private displayCompact(tasks: Task[], withSubtasks?: boolean): void {
	tasks.forEach((task) => {
		const icon = STATUS_ICONS[task.status] ?? '·';
		console.log(`${chalk.cyan(task.id)} ${icon} ${task.title}`);

		if (withSubtasks && task.subtasks?.length) {
			task.subtasks.forEach((subtask) => {
				const subIcon = STATUS_ICONS[subtask.status] ?? '·';
				console.log(
					`  ${chalk.gray(`${task.id}.${subtask.id}`)} ${subIcon} ${chalk.gray(subtask.title)}`
				);
			});
		}
	});
}
```
🤖 Prompt for AI Agents
In apps/cli/src/commands/list.command.ts around lines 227 to 241, the compact
display prints "undefined" if a task or subtask has an unexpected status; update
the code to use a safe fallback icon when STATUS_ICONS[status] is missing (e.g.,
a DEFAULT_STATUS_ICON or STATUS_ICONS['unknown']) for both task and subtask
icons, using a nullish/coalescing check so the fallback is used whenever the
lookup returns undefined.
```typescript
try {
	// Use Supabase client to refresh the token
	const response = await this.supabaseClient.refreshSession(
		authData.refreshToken
	);

	// Update authentication data
	const newAuthData: AuthCredentials = {
		...authData,
		token: response.token,
		refreshToken: response.refreshToken,
		expiresAt: response.expiresAt,
		savedAt: new Date().toISOString()
	};

	this.credentialStore.saveCredentials(newAuthData);
	return newAuthData;
} catch (error) {
	throw error;
}
}
```
🧹 Nitpick (assertive)
Remove no-op try/catch in refreshToken
The catch block rethrows unchanged; drop it to simplify.
```diff
-	try {
-		// Use Supabase client to refresh the token
-		const response = await this.supabaseClient.refreshSession(
-			authData.refreshToken
-		);
-
-		// Update authentication data
-		const newAuthData: AuthCredentials = {
-			...authData,
-			token: response.token,
-			refreshToken: response.refreshToken,
-			expiresAt: response.expiresAt,
-			savedAt: new Date().toISOString()
-		};
-
-		this.credentialStore.saveCredentials(newAuthData);
-		return newAuthData;
-	} catch (error) {
-		throw error;
-	}
+	// Use Supabase client to refresh the token
+	const response = await this.supabaseClient.refreshSession(
+		authData.refreshToken
+	);
+	const newAuthData: AuthCredentials = {
+		...authData,
+		token: response.token,
+		refreshToken: response.refreshToken,
+		expiresAt: response.expiresAt,
+		savedAt: new Date().toISOString()
+	};
+	this.credentialStore.saveCredentials(newAuthData);
+	return newAuthData;
```

📝 Committable suggestion
```typescript
// Use Supabase client to refresh the token
const response = await this.supabaseClient.refreshSession(
	authData.refreshToken
);

const newAuthData: AuthCredentials = {
	...authData,
	token: response.token,
	refreshToken: response.refreshToken,
	expiresAt: response.expiresAt,
	savedAt: new Date().toISOString()
};

this.credentialStore.saveCredentials(newAuthData);
return newAuthData;
```
🤖 Prompt for AI Agents
In packages/tm-core/src/auth/auth-manager.ts around lines 92 to 112, the
try/catch in the refreshToken flow is a no-op because the catch simply rethrows
the error; remove the surrounding try/catch and let the async function propagate
errors naturally, keeping the token refresh, newAuthData creation,
credentialStore.saveCredentials call, and return as-is.
```typescript
// Use file locking to prevent concurrent writes
const lockKey = filePath;
const existingLock = this.fileLocks.get(lockKey);

if (existingLock) {
	await existingLock;
}

const lockPromise = this.performAtomicWrite(filePath, data);
this.fileLocks.set(lockKey, lockPromise);

try {
	await lockPromise;
} finally {
	this.fileLocks.delete(lockKey);
}
}
```
🧹 Nitpick (assertive)
Locking is in-process only (FYI).
The current per-file Promise lock does not protect against concurrent writes from other Node processes. If cross-process writes are possible, consider a well-known library (e.g., proper file locks) or a journal/append strategy.
Would cross-process writes occur in your storage usage? If yes, I can propose a portable lock strategy.
🤖 Prompt for AI Agents
In packages/tm-core/src/storage/file-storage/file-operations.ts around lines
39–55, the current per-file Promise-based locking only prevents concurrent
writes inside the same Node process and will not protect against other Node
processes; if your storage can be written by multiple processes replace or
augment the in-memory Promise lock with an OS-level cross-process lock (e.g.,
use a portable locking library such as proper-lockfile or fs-ext flock) or
switch to a journal/append-and-rename strategy that provides atomicity across
processes; update the code to acquire the cross-process lock before calling
performAtomicWrite and release it in the finally block (you can keep the
in-process map as an optimization but do not rely on it for safety).
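The cross-process half of the problem is usually solved with the write-temp-then-rename strategy the prompt mentions; a minimal sketch using only `node:fs` (illustrative, not the package's `performAtomicWrite`):

```typescript
import { writeFileSync, renameSync, readFileSync, mkdtempSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

// rename() is atomic on POSIX when source and target share a filesystem,
// so readers in other processes see either the old file or the new one,
// never a half-written file. The temp file lives next to the target to
// guarantee the same filesystem.
function atomicWriteSync(filePath: string, data: string): void {
	const tmpPath = `${filePath}.${process.pid}.tmp`;
	writeFileSync(tmpPath, data, 'utf8');
	renameSync(tmpPath, filePath);
}

const dir = mkdtempSync(join(tmpdir(), 'tm-atomic-'));
const target = join(dir, 'tasks.json');
atomicWriteSync(target, '{"tasks":[]}');
atomicWriteSync(target, '{"tasks":[{"id":"1"}]}');
```

Rename alone prevents torn reads but not lost updates under concurrent read-modify-write; for that, an OS-level lock (e.g. proper-lockfile) is still needed.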
```typescript
detectFormat(data: any): FileFormat {
	if (!data || typeof data !== 'object') {
		return 'standard';
	}

	const keys = Object.keys(data);

	// Check if this uses the legacy format with tag keys
	// Legacy format has keys that are not 'tasks' or 'metadata'
	const hasLegacyFormat = keys.some(
		(key) => key !== 'tasks' && key !== 'metadata'
	);

	return hasLegacyFormat ? 'legacy' : 'standard';
}
```
Format detection misclassifies standard files with extra keys
Current logic treats any extra top-level key as legacy, which risks data corruption during saves. Detect standard when tasks/metadata exist; only treat legacy when tag-envelopes are present.
```diff
-	detectFormat(data: any): FileFormat {
-		if (!data || typeof data !== 'object') {
-			return 'standard';
-		}
-
-		const keys = Object.keys(data);
-
-		// Check if this uses the legacy format with tag keys
-		// Legacy format has keys that are not 'tasks' or 'metadata'
-		const hasLegacyFormat = keys.some(
-			(key) => key !== 'tasks' && key !== 'metadata'
-		);
-
-		return hasLegacyFormat ? 'legacy' : 'standard';
-	}
+	detectFormat(data: unknown): FileFormat {
+		if (!data || typeof data !== 'object') return 'standard';
+		const obj = data as Record<string, any>;
+		const hasTasks = Object.prototype.hasOwnProperty.call(obj, 'tasks');
+		const hasMetadata = Object.prototype.hasOwnProperty.call(obj, 'metadata');
+		if (hasTasks || hasMetadata) return 'standard';
+		// Legacy: top-level keys whose values look like tag envelopes
+		const values = Object.values(obj);
+		const looksLegacy = values.some(
+			(v) => v && typeof v === 'object' && ('tasks' in v || 'metadata' in v)
+		);
+		return looksLegacy ? 'legacy' : 'standard';
+	}
```

📝 Committable suggestion
```typescript
detectFormat(data: unknown): FileFormat {
	if (!data || typeof data !== 'object') return 'standard';
	const obj = data as Record<string, any>;
	const hasTasks = Object.prototype.hasOwnProperty.call(obj, 'tasks');
	const hasMetadata = Object.prototype.hasOwnProperty.call(obj, 'metadata');
	if (hasTasks || hasMetadata) return 'standard';
	// Legacy: top-level keys whose values look like tag envelopes
	const values = Object.values(obj);
	const looksLegacy = values.some(
		(v) => v && typeof v === 'object' && ('tasks' in v || 'metadata' in v)
	);
	return looksLegacy ? 'legacy' : 'standard';
}
```
🤖 Prompt for AI Agents
In packages/tm-core/src/storage/file-storage/format-handler.ts around lines 21
to 35, the detector currently marks any extra top-level key as 'legacy' which
misclassifies valid standard files; change the logic to first treat a file as
'standard' if it contains a 'tasks' or 'metadata' key, and only consider
'legacy' when the top-level keys look like tag-envelope format (e.g., no
'tasks'/'metadata' and most/all keys map to arrays or objects consistent with
tag lists); otherwise default to 'standard'. Ensure the check inspects values
(arrays/objects) to confirm tag-envelope shape instead of relying solely on
unknown key names.
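The proposed detection rule is easy to verify in isolation (data shapes here are illustrative):

```typescript
type FileFormat = 'legacy' | 'standard';

// Standard files have top-level tasks/metadata; legacy files wrap one
// { tasks, metadata } envelope per tag. Extra unknown keys alone no
// longer flip a standard file to legacy.
function detectFormat(data: unknown): FileFormat {
	if (!data || typeof data !== 'object') return 'standard';
	const obj = data as Record<string, unknown>;
	if ('tasks' in obj || 'metadata' in obj) return 'standard';
	const looksLegacy = Object.values(obj).some(
		(v) =>
			v !== null &&
			typeof v === 'object' &&
			('tasks' in (v as object) || 'metadata' in (v as object))
	);
	return looksLegacy ? 'legacy' : 'standard';
}
```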
```typescript
extractTags(data: any): string[] {
	if (!data) {
		return [];
	}

	const format = this.detectFormat(data);

	if (format === 'legacy') {
		// Return all tag keys from legacy format
		const keys = Object.keys(data);
		return keys.filter((key) => key !== 'tasks' && key !== 'metadata');
	}

	// Standard format - just has 'master' tag
	return ['master'];
}
```
🧹 Nitpick (assertive)
Standard tag extraction should respect metadata.tags
If present, return metadata.tags; otherwise fall back to ['master'].
```diff
-	// Standard format - just has 'master' tag
-	return ['master'];
+	// Standard format - prefer explicit tags if provided
+	const tags = data?.metadata?.tags;
+	return Array.isArray(tags) && tags.length > 0 ? [...new Set(tags)] : ['master'];
```

📝 Committable suggestion
```typescript
extractTags(data: any): string[] {
	if (!data) {
		return [];
	}

	const format = this.detectFormat(data);

	if (format === 'legacy') {
		// Return all tag keys from legacy format
		const keys = Object.keys(data);
		return keys.filter((key) => key !== 'tasks' && key !== 'metadata');
	}

	// Standard format - prefer explicit tags if provided
	const tags = data?.metadata?.tags;
	return Array.isArray(tags) && tags.length > 0
		? [...new Set(tags)]
		: ['master'];
}
```
🤖 Prompt for AI Agents
In packages/tm-core/src/storage/file-storage/format-handler.ts around lines 143
to 158, extractTags currently ignores standard-format metadata.tags; update it
so when format !== 'legacy' it checks data.metadata?.tags and returns that array
if present (and an array), otherwise fall back to ['master']; ensure you handle
missing metadata or non-array tags by returning ['master'] as the safe default.
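For the standard branch, the `metadata.tags` fallback behaves like this (field names as in the suggestion; shapes illustrative):

```typescript
// Prefer explicit metadata.tags, de-duplicated; otherwise default to the
// implicit 'master' tag.
function standardTags(data: { metadata?: { tags?: unknown } } | null): string[] {
	const tags = data?.metadata?.tags;
	return Array.isArray(tags) && tags.length > 0
		? [...new Set(tags.map(String))]
		: ['master'];
}
```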
```typescript
convertToSaveFormat(
	tasks: Task[],
	metadata: TaskMetadata,
	existingData: any,
	tag: string
): any {
	const resolvedTag = tag || 'master';

	// Normalize task IDs to strings
	const normalizedTasks = this.normalizeTasks(tasks);

	// Check if existing file uses legacy format
	if (existingData && this.detectFormat(existingData) === 'legacy') {
		return this.convertToLegacyFormat(normalizedTasks, metadata, resolvedTag);
	}

	// Use standard format for new files
	return this.convertToStandardFormat(normalizedTasks, metadata, tag);
}
```
Legacy merge and tag handling bug (data loss + wrong tag)
- Passing tag instead of resolvedTag drops the tag from metadata.
- Returning only the current tag in legacy mode discards other tags’ data.
```diff
 convertToSaveFormat(
@@
-	const resolvedTag = tag || 'master';
+	const resolvedTag = tag || 'master';
@@
-	// Check if existing file uses legacy format
-	if (existingData && this.detectFormat(existingData) === 'legacy') {
-		return this.convertToLegacyFormat(normalizedTasks, metadata, resolvedTag);
-	}
+	// If existing file uses legacy format, merge into it (preserve other tags)
+	if (existingData && this.detectFormat(existingData) === 'legacy') {
+		const updated = this.convertToLegacyFormat(
+			normalizedTasks,
+			metadata,
+			resolvedTag
+		);
+		return { ...(existingData ?? {}), ...updated };
+	}
@@
-	// Use standard format for new files
-	return this.convertToStandardFormat(normalizedTasks, metadata, tag);
+	// Use standard format (merge metadata if existing)
+	const mergedMeta =
+		existingData && existingData.metadata
+			? { ...existingData.metadata, ...metadata }
+			: metadata;
+	return this.convertToStandardFormat(normalizedTasks, mergedMeta, resolvedTag);
```

📝 Committable suggestion
```typescript
convertToSaveFormat(
	tasks: Task[],
	metadata: TaskMetadata,
	existingData: any,
	tag: string
): any {
	const resolvedTag = tag || 'master';

	// Normalize task IDs to strings
	const normalizedTasks = this.normalizeTasks(tasks);

	// If existing file uses legacy format, merge into it (preserve other tags)
	if (existingData && this.detectFormat(existingData) === 'legacy') {
		const updated = this.convertToLegacyFormat(
			normalizedTasks,
			metadata,
			resolvedTag
		);
		return { ...(existingData ?? {}), ...updated };
	}

	// Use standard format (merge metadata if existing)
	const mergedMeta =
		existingData && existingData.metadata
			? { ...existingData.metadata, ...metadata }
			: metadata;
	return this.convertToStandardFormat(normalizedTasks, mergedMeta, resolvedTag);
}
```
🤖 Prompt for AI Agents
In packages/tm-core/src/storage/file-storage/format-handler.ts around lines 163
to 181, the function uses the un-normalized tag and, in legacy mode, replaces
existing tag data instead of merging, causing tag loss and data loss; fix by
passing resolvedTag (not raw tag) into convertToStandardFormat, and when
detectFormat(existingData) === 'legacy' call convertToLegacyFormat with
existingData plus normalizedTasks and metadata so that the implementation merges
the current tag payload into the existingData.tags map (preserving other tags)
rather than returning only the current tag; ensure convertToLegacyFormat
signature/logic is updated to accept existingData and perform a safe merge of
tags.
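The merge-over-spread at the heart of the fix preserves sibling tags; a reduced sketch (the envelope shape is hypothetical):

```typescript
type TagEnvelope = { tasks: unknown[]; metadata: Record<string, unknown> };

// Spreading the existing object first and the updated tag second keeps
// every other tag's envelope while replacing only the tag being saved.
function mergeLegacySave(
	existing: Record<string, TagEnvelope>,
	tag: string,
	envelope: TagEnvelope
): Record<string, TagEnvelope> {
	return { ...existing, [tag]: envelope };
}

const merged = mergeLegacySave(
	{ master: { tasks: [{ id: '1' }], metadata: {} } },
	'feature',
	{ tasks: [], metadata: {} }
);
```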
```typescript
private normalizeTasks(tasks: Task[]): Task[] {
	return tasks.map((task) => ({
		...task,
		id: String(task.id), // Task IDs are strings
		dependencies: task.dependencies?.map((dep) => String(dep)) || [],
		subtasks:
			task.subtasks?.map((subtask) => ({
				...subtask,
				id: Number(subtask.id), // Subtask IDs are numbers
				parentId: String(subtask.parentId) // Parent ID is string (Task ID)
			})) || []
	}));
}
```
🛠️ Refactor suggestion
Harden normalization against invalid subtask IDs
Blind Number() coercion can produce NaN. Validate and fail fast to avoid corrupting persisted data.
```diff
 private normalizeTasks(tasks: Task[]): Task[] {
 	return tasks.map((task) => ({
 		...task,
 		id: String(task.id), // Task IDs are strings
 		dependencies: task.dependencies?.map((dep) => String(dep)) || [],
 		subtasks:
 			task.subtasks?.map((subtask) => ({
 				...subtask,
-				id: Number(subtask.id), // Subtask IDs are numbers
+				id: (() => {
+					const n = Number(subtask.id);
+					if (!Number.isFinite(n)) {
+						throw new Error(`Invalid subtask id: ${String(subtask.id)}`);
+					}
+					return n;
+				})(), // Subtask IDs are numbers
 				parentId: String(subtask.parentId) // Parent ID is string (Task ID)
 			})) || []
 	}));
 }
```

📝 Committable suggestion
```typescript
private normalizeTasks(tasks: Task[]): Task[] {
	return tasks.map((task) => ({
		...task,
		id: String(task.id), // Task IDs are strings
		dependencies: task.dependencies?.map((dep) => String(dep)) || [],
		subtasks:
			task.subtasks?.map((subtask) => ({
				...subtask,
				id: (() => {
					const n = Number(subtask.id);
					if (!Number.isFinite(n)) {
						throw new Error(`Invalid subtask id: ${String(subtask.id)}`);
					}
					return n;
				})(), // Subtask IDs are numbers
				parentId: String(subtask.parentId) // Parent ID is string (Task ID)
			})) || []
	}));
}
```
🤖 Prompt for AI Agents
In packages/tm-core/src/storage/file-storage/format-handler.ts around lines 222
to 234, the normalizeTasks function blindly coerces subtask.id with Number(...)
which can produce NaN and corrupt persisted data; update the normalization to
validate subtask.id after coercion and throw a clear error if it is NaN (fail
fast), ensure parentId is coerced to string and non-empty, and preserve original
types otherwise; the fix should check Number(subtask.id) for finite numeric
value, throw an Error with context including task.id and subtask index/id when
invalid, and only return normalized tasks when all IDs pass validation.
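The fail-fast coercion the prompt describes fits in a tiny helper (the name is illustrative):

```typescript
// Coerce a subtask id to a finite number or throw with enough context to
// find the offending task, instead of silently persisting NaN.
function toSubtaskId(raw: unknown, taskId: string): number {
	const n = Number(raw);
	if (!Number.isFinite(n)) {
		throw new Error(`Invalid subtask id ${String(raw)} on task ${taskId}`);
	}
	return n;
}
```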
```typescript
const statusConfig = {
	done: {
		color: chalk.green,
		icon: String.fromCharCode(8730),
		tableIcon: String.fromCharCode(8730)
	}, // √
	pending: { color: chalk.yellow, icon: 'o', tableIcon: 'o' },
	'in-progress': {
		color: chalk.hex('#FFA500'),
		icon: String.fromCharCode(9654),
		tableIcon: '>'
	}, // ▶
	deferred: { color: chalk.gray, icon: 'x', tableIcon: 'x' },
	blocked: { color: chalk.red, icon: '!', tableIcon: '!' },
	review: { color: chalk.magenta, icon: '?', tableIcon: '?' },
	cancelled: { color: chalk.gray, icon: 'X', tableIcon: 'X' }
};
```
💡 Verification agent
🧩 Analysis chain
Align icons/colors with existing scripts/modules/ui.js and fix glyphs.
Current mapping uses √ (U+221A) and ASCII 'o'/'>' which diverge from the established ✓/○/► style and looks inconsistent. Also add typing to avoid implicit any on statusConfig indexing.
```diff
 export function getStatusWithColor(
 	status: TaskStatus,
 	forTable: boolean = false
 ): string {
-	const statusConfig = {
-		done: {
-			color: chalk.green,
-			icon: String.fromCharCode(8730),
-			tableIcon: String.fromCharCode(8730)
-		}, // √
-		pending: { color: chalk.yellow, icon: 'o', tableIcon: 'o' },
+	const statusConfig: Partial<Record<TaskStatus, { color: (s: string) => string; icon: string; tableIcon: string }>> = {
+		done: {
+			color: chalk.green,
+			icon: '✓',
+			tableIcon: '✓'
+		},
+		pending: { color: chalk.yellow, icon: '○', tableIcon: '○' },
 		'in-progress': {
 			color: chalk.hex('#FFA500'),
-			icon: String.fromCharCode(9654),
-			tableIcon: '>'
-		}, // ▶
+			icon: '▶',
+			tableIcon: '►'
+		},
 		deferred: { color: chalk.gray, icon: 'x', tableIcon: 'x' },
 		blocked: { color: chalk.red, icon: '!', tableIcon: '!' },
 		review: { color: chalk.magenta, icon: '?', tableIcon: '?' },
 		cancelled: { color: chalk.gray, icon: 'X', tableIcon: 'X' }
 	};
@@
-	const simpleIcons = {
-		done: String.fromCharCode(8730), // √
-		pending: 'o',
-		'in-progress': '>',
+	const simpleIcons: Partial<Record<TaskStatus, string>> = {
+		done: '✓',
+		pending: '○',
+		'in-progress': '►',
 		deferred: 'x',
 		blocked: '!',
 		review: '?',
 		cancelled: 'X'
 	};
```

Compare output vs legacy helper:
Also applies to: 42-55
🏁 Script executed:

```shell
rg -n "statusConfig" scripts/modules/ui.js
```

Length of output: 167
🏁 Script executed:

```shell
#!/bin/bash
# Extract the statusConfig mapping from scripts/modules/ui.js for comparison
sed -n '340,380p' scripts/modules/ui.js
```

Length of output: 1322
🏁 Script executed:

```shell
#!/bin/bash
rg -n "TaskStatus" -g "apps/cli/src/utils/ui.ts"
```

Length of output: 139
Align icons/colors in getStatusWithColor with scripts/modules/ui.js and add typing
In apps/cli/src/utils/ui.ts’s getStatusWithColor:
- Annotate statusConfig as `Partial<Record<TaskStatus, { color: (s: string) => string; icon: string; tableIcon: string }>>`
- Replace:
  - `√` (U+221A) → `✓`
  - `o` → `○`
  - `String.fromCharCode(9654)` / `'>'` → `'▶'` / `'►'`
- Define `const simpleIcons: Partial<Record<TaskStatus, string>>` and map `done: '✓', pending: '○', 'in-progress': '►', deferred: 'x', blocked: '!', review: '?', cancelled: 'X'`
🤖 Prompt for AI Agents
In apps/cli/src/utils/ui.ts around lines 18 to 35, statusConfig is untyped and
some icons differ from scripts/modules/ui.js; update statusConfig to be typed as
Partial<Record<TaskStatus, { color: (s: string) => string; icon: string;
tableIcon: string }>>, replace the icons so done uses '✓' (U+2713) not '√',
pending uses '○' not 'o', and in-progress uses '►'/'▶' instead of
String.fromCharCode(9654)/'>', then add a new const simpleIcons:
Partial<Record<TaskStatus, string>> mapping done:'✓', pending:'○',
'in-progress':'►', deferred:'x', blocked:'!', review:'?', cancelled:'X' and use
that mapping where simple icons are needed so icons/colors align with
scripts/modules/ui.js.
```ts
export function createProgressBar(
	completed: number,
	total: number,
	width: number = 30
): string {
	if (total === 0) {
		return chalk.gray('No tasks');
	}

	const percentage = Math.round((completed / total) * 100);
	const filled = Math.round((completed / total) * width);
	const empty = width - filled;

	const bar = chalk.green('█').repeat(filled) + chalk.gray('░').repeat(empty);

	return `${bar} ${chalk.cyan(`${percentage}%`)} (${completed}/${total})`;
}
```
🛠️ Refactor suggestion
Clamp progress values and guard division.
Avoid NaN/overflows for completed > total or negative inputs.
```diff
 export function createProgressBar(
 	completed: number,
 	total: number,
 	width: number = 30
 ): string {
-	if (total === 0) {
+	if (total <= 0) {
 		return chalk.gray('No tasks');
 	}
-	const percentage = Math.round((completed / total) * 100);
-	const filled = Math.round((completed / total) * width);
+	const done = Math.max(0, Math.min(completed, total));
+	const ratio = done / total;
+	const percentage = Math.round(ratio * 100);
+	const filled = Math.round(ratio * width);
 	const empty = width - filled;
 	const bar = chalk.green('█').repeat(filled) + chalk.gray('░').repeat(empty);
-	return `${bar} ${chalk.cyan(`${percentage}%`)} (${completed}/${total})`;
+	return `${bar} ${chalk.cyan(`${percentage}%`)} (${done}/${total})`;
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
export function createProgressBar(
	completed: number,
	total: number,
	width: number = 30
): string {
	// Guard against zero or negative totals
	if (total <= 0) {
		return chalk.gray('No tasks');
	}
	// Clamp completed between 0 and total, then compute the ratio
	const done = Math.max(0, Math.min(completed, total));
	const ratio = done / total;
	const percentage = Math.round(ratio * 100);
	const filled = Math.round(ratio * width);
	const empty = width - filled;
	const bar = chalk.green('█').repeat(filled) + chalk.gray('░').repeat(empty);
	// Show the clamped count rather than raw `completed`
	return `${bar} ${chalk.cyan(`${percentage}%`)} (${done}/${total})`;
}
```
🤖 Prompt for AI Agents
In apps/cli/src/utils/ui.ts around lines 105 to 121, the progress calculations
don't guard against total <= 0 or completed values outside [0, total], which can
produce NaN or overflow; clamp total and completed using Math.max/Math.min
(treat total <= 0 as zero and return the existing 'No tasks' string), set
completed = Math.max(0, Math.min(completed, total)), then compute percentage =
Math.round((completed / total) * 100) only after ensuring total > 0, compute
filled = Math.round((completed / total) * width) and ensure filled is between 0
and width (e.g., Math.max(0, Math.min(filled, width))) before rendering the bar
so negative or over-complete inputs cannot produce invalid output.
```ts
export function formatDependenciesWithStatus(
	dependencies: string[] | number[],
	tasks: Task[]
): string {
	if (!dependencies || dependencies.length === 0) {
		return chalk.gray('none');
	}
```
🛠️ Refactor suggestion
Accept undefined dependencies to match Task shape.
Task.dependencies is often optional; current signature forces callers to cast.
```diff
-export function formatDependenciesWithStatus(
-	dependencies: string[] | number[],
+export function formatDependenciesWithStatus(
+	dependencies: Array<string | number> | undefined,
 	tasks: Task[]
 ): string {
 	if (!dependencies || dependencies.length === 0) {
 		return chalk.gray('none');
 	}
```

📝 Committable suggestion
```ts
export function formatDependenciesWithStatus(
	dependencies: Array<string | number> | undefined,
	tasks: Task[]
): string {
	if (!dependencies || dependencies.length === 0) {
		return chalk.gray('none');
	}
	// …rest of implementation…
}
```
🤖 Prompt for AI Agents
In apps/cli/src/utils/ui.ts around lines 201 to 207, the function signature for
formatDependenciesWithStatus requires a non-optional dependencies parameter
which forces callers to cast when Task.dependencies is optional; change the
signature to accept undefined (e.g., dependencies?: string[] | number[] or
dependencies: string[] | number[] | undefined) and ensure the existing guard (if
(!dependencies || dependencies.length === 0)) still correctly handles the
undefined case so callers can pass Task.dependencies directly without casting.
```ts
const row: string[] = [
	chalk.cyan(task.id.toString()),
	truncate(task.title, colWidths[1] - 3),
	getStatusWithColor(task.status, true), // Use table version
	getPriorityWithColor(task.priority)
];
```
🧹 Nitpick (assertive)
Prevent negative width in truncate; guard on very narrow terminals.
```diff
-	const row: string[] = [
+	const safeTitleWidth = Math.max(8, (colWidths[1] ?? 20) - 3);
+	const row: string[] = [
 		chalk.cyan(task.id.toString()),
-		truncate(task.title, colWidths[1] - 3),
+		truncate(task.title, safeTitleWidth),
 		getStatusWithColor(task.status, true), // Use table version
 		getPriorityWithColor(task.priority)
 	];
```

📝 Committable suggestion
```ts
// Ensure we never pass a negative or too-small width to truncate
const safeTitleWidth = Math.max(8, (colWidths[1] ?? 20) - 3);
const row: string[] = [
	chalk.cyan(task.id.toString()),
	truncate(task.title, safeTitleWidth),
	getStatusWithColor(task.status, true), // Use table version
	getPriorityWithColor(task.priority)
];
```
🤖 Prompt for AI Agents
In apps/cli/src/utils/ui.ts around lines 279 to 285, the truncate call can
receive a negative width on very narrow terminals; compute a safeWidth =
Math.max(0, colWidths[1] - 3) before truncation and use truncate(task.title,
safeWidth). If safeWidth is 0 (or otherwise too small to meaningfully truncate),
fallback to an empty string or a clipped slice (e.g., task.title.slice(0,
safeWidth)) to avoid passing negative values into truncate.
```ts
// Normalize subtask IDs to strings
this.subtasks = (data.subtasks || []).map((subtask) => ({
	...subtask,
	id: Number(subtask.id), // Keep subtask IDs as numbers per interface
	parentId: String(subtask.parentId)
}));
```
🧹 Nitpick (assertive)
Comment contradicts code
IDs are normalized to numbers (not strings). Update comment for clarity.
Apply:
```diff
-	// Normalize subtask IDs to strings
+	// Normalize subtask IDs to numbers and parentId to string
```

📝 Committable suggestion
```ts
// Normalize subtask IDs to numbers and parentId to string
this.subtasks = (data.subtasks || []).map((subtask) => ({
	...subtask,
	id: Number(subtask.id), // Keep subtask IDs as numbers per interface
	parentId: String(subtask.parentId)
}));
```
🤖 Prompt for AI Agents
In packages/tm-core/src/entities/task.entity.ts around lines 50 to 55, the
comment states "Normalize subtask IDs to strings" but the code converts
subtask.id to Number — update the comment to accurately reflect the behavior
(e.g., "Normalize subtask IDs to numbers" or "Normalize subtask IDs to numbers
and parentId to string"), so the comment matches the code's normalization of id
to a number and parentId to a string.
```ts
return new ApiStorage({
	endpoint: apiEndpoint,
	accessToken: apiAccessToken,
	projectId: config.projectPath,
	timeout: config.retry?.requestTimeout,
	enableRetry: config.retry?.retryOnNetworkError,
	maxRetries: config.retry?.retryAttempts
});
```
Don’t leak filesystem paths as projectId; derive a safe slug instead
Passing config.projectPath as projectId risks exposing local paths and invalid characters on the API side.
Apply:
```diff
 return new ApiStorage({
 	endpoint: apiEndpoint,
 	accessToken: apiAccessToken,
-	projectId: config.projectPath,
+	projectId: StorageFactory.deriveProjectId(config.projectPath),
 	timeout: config.retry?.requestTimeout,
 	enableRetry: config.retry?.retryOnNetworkError,
 	maxRetries: config.retry?.retryAttempts
 });
```

Add helper:

```diff
@@
 export class StorageFactory {
@@
 	static isHamsterAvailable(config: Partial<IConfiguration>): boolean {
 		return !!(config.storage?.apiEndpoint && config.storage?.apiAccessToken);
 	}
+
+	/** Derive a stable, API-safe project identifier from a path */
+	private static deriveProjectId(projectPath?: string): string {
+		if (!projectPath) return 'default';
+		try {
+			const base = projectPath.split(/[\\/]/).pop() || 'default';
+			// Keep URL/slug-safe chars, cap length
+			return base.replace(/[^a-zA-Z0-9-_]/g, '-').slice(0, 64);
+		} catch {
+			return 'default';
+		}
+	}
```

📝 Committable suggestion
```ts
// Inside StorageFactory, replace the existing ApiStorage instantiation:
return new ApiStorage({
	endpoint: apiEndpoint,
	accessToken: apiAccessToken,
	projectId: StorageFactory.deriveProjectId(config.projectPath),
	timeout: config.retry?.requestTimeout,
	enableRetry: config.retry?.retryOnNetworkError,
	maxRetries: config.retry?.retryAttempts
});

// Add this helper to StorageFactory, e.g. right after isHamsterAvailable:
/** Derive a stable, API-safe project identifier from a path */
private static deriveProjectId(projectPath?: string): string {
	if (!projectPath) return 'default';
	try {
		const base = projectPath.split(/[\\/]/).pop() || 'default';
		// Keep URL/slug-safe chars, cap length
		return base.replace(/[^a-zA-Z0-9-_]/g, '-').slice(0, 64);
	} catch {
		return 'default';
	}
}
```
🤖 Prompt for AI Agents
In packages/tm-core/src/storage/storage-factory.ts around lines 169 to 176,
don’t pass config.projectPath directly as projectId because it may leak
filesystem paths and contain invalid characters; instead add a small helper
(e.g., deriveProjectIdFromPath) that takes the projectPath, extracts a base
name, lowercases it, replaces or removes filesystem-unfriendly characters
(spaces, path separators, backslashes, colons) with safe characters (dash or
underscore), collapses repeated non-alphanumerics, and truncates to a reasonable
length (e.g., 64 chars), then use that sanitized slug as projectId when
constructing ApiStorage; implement the helper in this file or a shared util and
call it here so ApiStorage receives a safe, non-path projectId.
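As a quick sanity check on the slug logic above, a standalone version of the proposed helper (a plain function here, rather than the private static method in the suggestion) behaves like this:

```typescript
// Standalone copy of the deriveProjectId helper proposed above
function deriveProjectId(projectPath?: string): string {
	if (!projectPath) return 'default';
	const base = projectPath.split(/[\\/]/).pop() || 'default';
	// Keep URL/slug-safe chars, cap length
	return base.replace(/[^a-zA-Z0-9-_]/g, '-').slice(0, 64);
}

// Filesystem details never reach the API:
// deriveProjectId('/home/alice/my project') -> 'my-project'
// deriveProjectId('C:\\work\\tm.core')      -> 'tm-core'
// deriveProjectId(undefined)                -> 'default'
```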
```ts
static validateStorageConfig(config: Partial<IConfiguration>): {
	isValid: boolean;
	errors: string[];
} {
	const errors: string[] = [];
	const storageType = config.storage?.type;

	if (!storageType) {
		errors.push('Storage type is not specified');
		return { isValid: false, errors };
	}

	switch (storageType) {
		case 'api':
			if (!config.storage?.apiEndpoint) {
				errors.push('API endpoint is required for API storage');
			}
			if (!config.storage?.apiAccessToken) {
				errors.push('API access token is required for API storage');
			}
			break;

		case 'file':
			// File storage doesn't require additional config
			break;

		case 'auto':
			// Auto storage is valid - it will determine the actual type at runtime
			// No specific validation needed as it will fall back to file if API not configured
			break;

		default:
			errors.push(`Unknown storage type: ${storageType}`);
	}

	return {
		isValid: errors.length === 0,
		errors
	};
}
```
🧹 Nitpick (assertive)
Validation: reflect the 'auto' default when type is missing
When type is missing, the factory defaults to 'auto'. Validation should reflect that to reduce false negatives in tooling.
```diff
-	if (!storageType) {
-		errors.push('Storage type is not specified');
-		return { isValid: false, errors };
-	}
+	if (!storageType) {
+		// Missing is acceptable; factory treats it as 'auto'
+		return { isValid: true, errors };
+	}
```

📝 Committable suggestion
```ts
static validateStorageConfig(config: Partial<IConfiguration>): {
	isValid: boolean;
	errors: string[];
} {
	const errors: string[] = [];
	const storageType = config.storage?.type;
	if (!storageType) {
		// Missing is acceptable; factory treats it as 'auto'
		return { isValid: true, errors };
	}
	switch (storageType) {
		case 'api':
			if (!config.storage?.apiEndpoint) {
				errors.push('API endpoint is required for API storage');
			}
			if (!config.storage?.apiAccessToken) {
				errors.push('API access token is required for API storage');
			}
			break;
		case 'file':
			// File storage doesn't require additional config
			break;
		case 'auto':
			// Auto storage is valid - it will determine the actual type at runtime
			// No specific validation needed as it will fall back to file if API not configured
			break;
		default:
			errors.push(`Unknown storage type: ${storageType}`);
	}
	return {
		isValid: errors.length === 0,
		errors
	};
}
```
🤖 Prompt for AI Agents
In packages/tm-core/src/storage/storage-factory.ts around lines 195 to 234, the
validator currently treats a missing storage.type as an error but the factory
defaults to 'auto' when type is absent; update validation to reflect that by not
returning an error when storage.type is undefined (treat undefined as 'auto') or
by adding a non-blocking advisory message stating default to 'auto' so tooling
won't show a false negative; adjust the initial check to only push an error when
type is explicitly invalid and ensure the returned isValid/errors reflect that
behavior.
```ts
import type {
	AIModel,
	AIOptions,
	AIResponse,
	ProviderInfo,
	ProviderUsageStats
} from '../../src/interfaces/ai-provider.interface';
import {
	BaseProvider,
	type BaseProviderConfig,
	type CompletionResult
} from '../../src/providers/ai/base-provider';
```
💡 Verification agent
🧩 Analysis chain
Fix ESM import extensions to match NodeNext style used elsewhere.
These two imports are missing the .js extension and will break under module: NodeNext (other TS files in the repo already use .js in imports).
Apply:
```diff
-} from '../../src/interfaces/ai-provider.interface';
+} from '../../src/interfaces/ai-provider.interface.js';
 import {
 	BaseProvider,
 	type BaseProviderConfig,
 	type CompletionResult
-} from '../../src/providers/ai/base-provider';
+} from '../../src/providers/ai/base-provider.js';
```

Run to confirm inconsistent imports:
🏁 Script executed:

```shell
rg -nP --type=ts "(from ['\"][^'\"]+)(?<!\.js)['\"]" packages/tm-core/tests/mocks/mock-provider.ts
```

Length of output: 69
🏁 Script executed:

```shell
#!/bin/bash
# Verify import statements in mock-provider.ts for missing .js extensions
sed -n '1,30p' packages/tm-core/tests/mocks/mock-provider.ts
```

Length of output: 622
🏁 Script executed:

```shell
#!/bin/bash
set -e
echo "TS imports with .js extension in imports:"
rg -nP --type=ts "import .* from ['\"][^'\"]+\\.js['\"]" || true
echo "----"
echo "TS test files under packages:"
find packages -type f -path "*/tests/*.ts" || true
```

Length of output: 9114
Include .js extensions on ESM imports in mock-provider.ts
Add the .js extension to both import paths to align with NodeNext resolution:
```diff
-import type {
-	AIModel,
-	AIOptions,
-	AIResponse,
-	ProviderInfo,
-	ProviderUsageStats
-} from '../../src/interfaces/ai-provider.interface';
+import type {
+	AIModel,
+	AIOptions,
+	AIResponse,
+	ProviderInfo,
+	ProviderUsageStats
+} from '../../src/interfaces/ai-provider.interface.js';
-import {
-	BaseProvider,
-	type BaseProviderConfig,
-	type CompletionResult
-} from '../../src/providers/ai/base-provider';
+import {
+	BaseProvider,
+	type BaseProviderConfig,
+	type CompletionResult
+} from '../../src/providers/ai/base-provider.js';
```

Location: packages/tm-core/tests/mocks/mock-provider.ts
📝 Committable suggestion
```ts
import type {
	AIModel,
	AIOptions,
	AIResponse,
	ProviderInfo,
	ProviderUsageStats
} from '../../src/interfaces/ai-provider.interface.js';
import {
	BaseProvider,
	type BaseProviderConfig,
	type CompletionResult
} from '../../src/providers/ai/base-provider.js';
```
🤖 Prompt for AI Agents
In packages/tm-core/tests/mocks/mock-provider.ts around lines 5–16, the ESM
import specifiers are missing the .js extension; update the two import paths
('../../src/interfaces/ai-provider.interface' and
'../../src/providers/ai/base-provider') to include the .js extension (e.g.
'../../src/interfaces/ai-provider.interface.js' and
'../../src/providers/ai/base-provider.js') so NodeNext resolution succeeds,
keeping the same imported symbols and types.
```ts
	// Return successful mock response
	return {
		content: `Mock response to: ${prompt}`,
		inputTokens: this.calculateTokens(prompt),
		outputTokens: this.calculateTokens(`Mock response to: ${prompt}`),
		finishReason: 'complete',
		model: this.model
	};
}
```
🧹 Nitpick (assertive)
Fallback to default model when this.model is unset.
Prevents undefined model values in results.
```diff
-	model: this.model
+	model: this.model ?? this.getDefaultModel()
```

And:

```diff
-	model: this.model
+	model: this.model ?? this.getDefaultModel()
```

Also applies to: 121-125
🤖 Prompt for AI Agents
In packages/tm-core/tests/mocks/mock-provider.ts around lines 74 to 82 (and also
apply same fix at 121-125), the mock response sets model to this.model which can
be undefined; update the return to use a fallback default (e.g., this.model ??
'mock-model' or a configured DEFAULT_MODEL constant) so the result always
includes a defined model string; change both return sites to use the fallback
expression and ensure any tests/consumers expect that default value.
```ts
// Implement remaining abstract methods
async generateStreamingCompletion(
	prompt: string,
	_options?: AIOptions
): AsyncIterator<Partial<AIResponse>> {
	// Simple mock implementation
	const response: Partial<AIResponse> = {
		content: `Mock streaming response to: ${prompt}`,
		provider: this.getName(),
		model: this.model
	};

	return {
		async next() {
			return { value: response, done: true };
		}
	};
}
```
🛠️ Refactor suggestion
Streaming iterator returns done: true immediately — yields nothing.
The current AsyncIterator returns done: true on the first next(). Use an async generator and return AsyncIterableIterator.
```diff
-async generateStreamingCompletion(
-	prompt: string,
-	_options?: AIOptions
-): AsyncIterator<Partial<AIResponse>> {
-	// Simple mock implementation
-	const response: Partial<AIResponse> = {
-		content: `Mock streaming response to: ${prompt}`,
-		provider: this.getName(),
-		model: this.model
-	};
-
-	return {
-		async next() {
-			return { value: response, done: true };
-		}
-	};
-}
+async *generateStreamingCompletion(
+	prompt: string,
+	_options?: AIOptions
+): AsyncIterableIterator<Partial<AIResponse>> {
+	yield {
+		content: `Mock streaming response to: ${prompt}`,
+		provider: this.getName(),
+		model: this.model ?? this.getDefaultModel()
+	};
+}
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In packages/tm-core/tests/mocks/mock-provider.ts around lines 115 to 132, the
generateStreamingCompletion implementation returns an AsyncIterator whose first
next() sets done: true so no value is yielded; replace it with an async
generator that yields the response and then completes, and update the return
type to AsyncIterableIterator<Partial<AIResponse>>. Concretely, change the
method to an async generator (async *) that constructs the response object and
yields it (so callers receive a value), then naturally returns to signal
completion; keep provider and model fields as before.
```ts
// 4. Load environment variables (highest precedence)
const envConfig = this.envProvider.loadConfig();
if (Object.keys(envConfig).length > 0) {
	this.merger.addSource({
		name: 'environment',
		config: envConfig,
		precedence: CONFIG_PRECEDENCE.ENVIRONMENT
	});
}
```
🛠️ Refactor suggestion
Apply environment runtime state (e.g., TASKMASTER_TAG) after loading persisted state.
Environment variables are highest precedence. Incorporate envProvider.getRuntimeState() to override state (activeTag).
```diff
 // 6. Load runtime state
 await this.stateManager.loadState();
+// Environment runtime state (highest precedence)
+const envState = this.envProvider.getRuntimeState();
+if (envState.activeTag) {
+	await this.stateManager.setCurrentTag(envState.activeTag);
+}
```

📝 Committable suggestion
```ts
// 6. Load runtime state
await this.stateManager.loadState();
// Environment runtime state (highest precedence)
const envState = this.envProvider.getRuntimeState();
if (envState.activeTag) {
	await this.stateManager.setCurrentTag(envState.activeTag);
}
```
🤖 Prompt for AI Agents
In packages/tm-core/src/config/config-manager.ts around lines 109 to 117, the
code loads environment variables into the merger but does not apply environment
runtime state (e.g., TASKMASTER_TAG) to the manager's runtimeState after
persisted state is loaded; call this.envProvider.getRuntimeState() after
environment config is added, and if it returns values (such as activeTag), merge
or assign them into this.runtimeState (overriding the persisted values) so the
environment runtime state takes highest precedence for runtime fields like
activeTag.
```ts
getModelConfig() {
	return (
		this.config.models || {
			main: 'claude-3-5-sonnet-20241022',
			fallback: 'gpt-4o-mini'
		}
	);
}
```
🛠️ Refactor suggestion
Don’t hardcode model defaults here; delegate to loader defaults.
Keep defaults centralized to avoid drift and vendor lock-in in the manager layer.
```diff
-getModelConfig() {
-	return (
-		this.config.models || {
-			main: 'claude-3-5-sonnet-20241022',
-			fallback: 'gpt-4o-mini'
-		}
-	);
-}
+getModelConfig() {
+	return this.config.models || this.loader.getDefaultConfig().models;
+}
```

📝 Committable suggestion
```ts
getModelConfig() {
	return this.config.models || this.loader.getDefaultConfig().models;
}
```
🤖 Prompt for AI Agents
In packages/tm-core/src/config/config-manager.ts around lines 167 to 174, the
getModelConfig method currently returns hardcoded model defaults; remove those
literals and delegate to the centralized loader defaults instead. Change the
method to return this.config.models if present, otherwise call the config
loader’s default provider (e.g., ConfigLoader.getDefaultModelConfig() or
equivalent existing loader method) so defaults remain centralized; do not
introduce new hardcoded strings in the manager layer.
```ts
async updateConfig(updates: PartialConfiguration): Promise<void> {
	// Merge updates into current config
	Object.assign(this.config, updates);

	// Save to persistence
	await this.persistence.saveConfig(this.config);

	// Re-initialize to respect precedence
	this.initialized = false;
	await this.initialize();
}
```
Shallow merge corrupts nested config (storage/models). Use deep merge.
Object.assign replaces nested objects wholesale (e.g., storage.basePath update drops storage.type). Use a deep merge and then reinit.
```diff
-// Merge updates into current config
-Object.assign(this.config, updates);
+// Deep-merge updates into current config
+this.config = this.deepMerge(this.config, updates as any);
```
Add this helper inside the class:
```ts
// Simple deep merge for plain objects/arrays (replace arrays)
private deepMerge<T>(target: T, source: Partial<T>): T {
	if (typeof target !== 'object' || target === null) return source as T;
	if (typeof source !== 'object' || source === null) return target;
	const out: any = Array.isArray(target)
		? [...((source as any) ?? [])]
		: { ...(target as any) };
	for (const [k, v] of Object.entries(source as any)) {
		if (
			v &&
			typeof v === 'object' &&
			!Array.isArray(v) &&
			typeof (target as any)[k] === 'object' &&
			!Array.isArray((target as any)[k])
		) {
			out[k] = this.deepMerge((target as any)[k], v);
		} else {
			out[k] = v;
		}
	}
	return out;
}
```
🤖 Prompt for AI Agents
In packages/tm-core/src/config/config-manager.ts around lines 220 to 230, the
current updateConfig uses Object.assign which causes shallow merges that
overwrite nested objects (e.g., storage/models) and corrupt configuration;
replace the shallow merge with a deepMerge helper method on the class (handles
plain objects, replaces arrays) and call this.config =
this.deepMerge(this.config, updates) before saving and reinitializing; add the
provided deepMerge helper as a private method on the class and ensure
updateConfig uses it instead of Object.assign so nested keys are merged
correctly.
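The failure mode is easy to reproduce in isolation. This standalone sketch uses a hypothetical config shape (not the real `Configuration` interface) to show `Object.assign` dropping a sibling nested key:

```typescript
// Standalone repro of the shallow-merge hazard; the config shape here is
// illustrative, not the real Configuration interface.
const config: Record<string, any> = {
	storage: { type: 'file', basePath: '/tmp/project' },
	models: { main: 'model-a', fallback: 'model-b' }
};
const updates = { storage: { basePath: '/srv/project' } };

// Shallow merge: the whole `storage` object is replaced, so `type` is lost.
Object.assign(config, updates);
console.log(config.storage.type); // undefined
console.log(config.storage.basePath); // '/srv/project'
```

A recursive merge as suggested above would instead update `storage.basePath` while preserving `storage.type`.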
```ts
async setResponseLanguage(language: string): Promise<void> {
	if (!this.config.custom) {
		this.config.custom = {};
	}
	(this.config.custom as any).responseLanguage = language;
	await this.persistence.saveConfig(this.config);
}
```
🧹 Nitpick (assertive)
Persisting responseLanguage should refresh precedence/derived state.
After saving, reinitialize so getters reflect the new config consistently.
```diff
 (this.config.custom as any).responseLanguage = language;
 await this.persistence.saveConfig(this.config);
+this.initialized = false;
+await this.initialize();
```
📝 Committable suggestion
```ts
async setResponseLanguage(language: string): Promise<void> {
	if (!this.config.custom) {
		this.config.custom = {};
	}
	(this.config.custom as any).responseLanguage = language;
	await this.persistence.saveConfig(this.config);
	this.initialized = false;
	await this.initialize();
}
```
🤖 Prompt for AI Agents
In packages/tm-core/src/config/config-manager.ts around lines 235 to 241, after
persisting the updated responseLanguage you must reinitialize the manager's
derived/precedence state so getters reflect the change; after await
this.persistence.saveConfig(this.config) call the existing
initialization/recompute method (e.g., this.initialize() or the specific method
that recomputes precedence/derived values) to refresh internal state, or if none
exists, invoke the logic that recalculates precedence and derived getters
immediately after saving.
```ts
	getConfigSources() {
		return this.merger.getSources();
	}
}
```
🧹 Nitpick (assertive)
Expose return type for getConfigSources.
Minor: annotate return type for better DX.
```diff
-getConfigSources() {
+getConfigSources(): Array<{ name: string; precedence: number; config: unknown }> {
 	return this.merger.getSources();
 }
```
📝 Committable suggestion
```ts
	getConfigSources(): Array<{ name: string; precedence: number; config: unknown }> {
		return this.merger.getSources();
	}
}
```
🤖 Prompt for AI Agents
In packages/tm-core/src/config/config-manager.ts around lines 276-279, the
getConfigSources method is missing an explicit return type; update the method
signature to include a precise return type (either the concrete exported type
used by merger.getSources if available or a TypeScript utility such as
ReturnType<typeof this.merger.getSources>) so callers get proper IDE/type
information, and keep the body returning this.merger.getSources() unchanged.
```ts
async saveMetadata(_metadata: TaskMetadata, tag?: string): Promise<void> {
	const tasks = await this.loadTasks(tag);
	await this.saveTasks(tasks, tag);
}
```
Bug: saveMetadata ignores the provided metadata.
The _metadata parameter is unused; callers cannot persist metadata updates. Merge into the correct structure based on current format.
```diff
-async saveMetadata(_metadata: TaskMetadata, tag?: string): Promise<void> {
-	const tasks = await this.loadTasks(tag);
-	await this.saveTasks(tasks, tag);
-}
+async saveMetadata(metadata: TaskMetadata, tag?: string): Promise<void> {
+	const filePath = this.pathResolver.getTasksPath();
+	const resolvedTag = tag || 'master';
+	await this.fileOps.ensureDir(this.pathResolver.getTasksDir());
+
+	let existingData: any = {};
+	try {
+		existingData = await this.fileOps.readJson(filePath);
+	} catch (error: any) {
+		if (error.code !== 'ENOENT') {
+			throw new Error(`Failed to read existing metadata: ${error.message}`);
+		}
+	}
+
+	const now = new Date().toISOString();
+	const mergedMeta = { ...(metadata || {}), lastModified: metadata?.lastModified ?? now, tags: [resolvedTag] };
+
+	if (
+		this.formatHandler.detectFormat(existingData) === 'legacy' ||
+		Object.keys(existingData).some((k) => k !== 'tasks' && k !== 'metadata')
+	) {
+		// Legacy format
+		if (!existingData[resolvedTag]) {
+			existingData[resolvedTag] = { tasks: [], metadata: mergedMeta };
+		} else {
+			existingData[resolvedTag].metadata = { ...(existingData[resolvedTag].metadata || {}), ...mergedMeta };
+		}
+	} else if (resolvedTag === 'master') {
+		// Standard format
+		existingData = {
+			tasks: existingData.tasks || [],
+			metadata: { ...(existingData.metadata || {}), ...mergedMeta }
+		};
+	} else {
+		// Convert to legacy to support non-master tag metadata
+		const masterTasks = existingData.tasks || [];
+		const masterMetadata = existingData.metadata || {};
+		existingData = {
+			master: { tasks: masterTasks, metadata: masterMetadata },
+			[resolvedTag]: { tasks: [], metadata: mergedMeta }
+		};
+	}
+
+	await this.fileOps.writeJson(filePath, existingData);
+}
```
📝 Committable suggestion
```ts
async saveMetadata(metadata: TaskMetadata, tag?: string): Promise<void> {
	const filePath = this.pathResolver.getTasksPath();
	const resolvedTag = tag || 'master';
	await this.fileOps.ensureDir(this.pathResolver.getTasksDir());

	let existingData: any = {};
	try {
		existingData = await this.fileOps.readJson(filePath);
	} catch (error: any) {
		if (error.code !== 'ENOENT') {
			throw new Error(`Failed to read existing metadata: ${error.message}`);
		}
	}

	const now = new Date().toISOString();
	const mergedMeta = {
		...metadata,
		lastModified: metadata.lastModified ?? now,
		tags: [resolvedTag]
	};

	if (
		this.formatHandler.detectFormat(existingData) === 'legacy' ||
		Object.keys(existingData).some((k) => k !== 'tasks' && k !== 'metadata')
	) {
		// Legacy format: store per-tag buckets
		if (!existingData[resolvedTag]) {
			existingData[resolvedTag] = { tasks: [], metadata: mergedMeta };
		} else {
			existingData[resolvedTag].metadata = {
				...(existingData[resolvedTag].metadata || {}),
				...mergedMeta
			};
		}
	} else if (resolvedTag === 'master') {
		// Standard format: flat object for master
		existingData = {
			tasks: existingData.tasks || [],
			metadata: { ...(existingData.metadata || {}), ...mergedMeta }
		};
	} else {
		// Convert to legacy when tagging non-master
		const masterTasks = existingData.tasks || [];
		const masterMetadata = existingData.metadata || {};
		existingData = {
			master: { tasks: masterTasks, metadata: masterMetadata },
			[resolvedTag]: { tasks: [], metadata: mergedMeta }
		};
	}

	await this.fileOps.writeJson(filePath, existingData);
}
```
🤖 Prompt for AI Agents
In packages/tm-core/src/storage/file-storage/file-storage.ts around lines
239–242, saveMetadata currently ignores the _metadata parameter; change it to
load the stored tasks for the tag, merge the provided _metadata into the
existing metadata structure (preserving existing keys) on that tasks object
using a deep/structural merge appropriate to the current storage format, then
call saveTasks with the updated tasks and tag; also validate/normalize the
incoming metadata shape before merging and ensure no other fields are
overwritten accidentally.
```ts
async appendTasks(tasks: Task[], tag?: string): Promise<void> {
	const existingTasks = await this.loadTasks(tag);
	const allTasks = [...existingTasks, ...tasks];
	await this.saveTasks(allTasks, tag);
}
```
🧹 Nitpick (assertive)
Append should dedupe by task ID to avoid collisions.
Concatenating arrays can create duplicate IDs. Dedupe by id, last write wins.
```diff
-const existingTasks = await this.loadTasks(tag);
-const allTasks = [...existingTasks, ...tasks];
-await this.saveTasks(allTasks, tag);
+const existingTasks = await this.loadTasks(tag);
+const map = new Map<string, Task>(existingTasks.map((t) => [String(t.id), t]));
+for (const t of tasks) map.set(String(t.id), t);
+await this.saveTasks([...map.values()], tag);
```
📝 Committable suggestion
```ts
async appendTasks(tasks: Task[], tag?: string): Promise<void> {
	const existingTasks = await this.loadTasks(tag);
	const map = new Map<string, Task>(
		existingTasks.map((t) => [String(t.id), t])
	);
	for (const t of tasks) {
		map.set(String(t.id), t);
	}
	await this.saveTasks([...map.values()], tag);
}
```
🤖 Prompt for AI Agents
In packages/tm-core/src/storage/file-storage/file-storage.ts around lines 247 to
251, appendTasks currently concatenates existingTasks and new tasks which can
produce duplicate task IDs; change appendTasks to merge by task id with "last
write wins" semantics: load existing tasks, iterate combined list in order where
new tasks override existing ones (e.g., use a Map keyed by id, writing existing
first then new to overwrite, or write combined and then reduce keeping the last
occurrence), then save the deduped array preserving intended order before
calling saveTasks.
```ts
async copyTag(sourceTag: string, targetTag: string): Promise<void> {
	const tasks = await this.loadTasks(sourceTag);

	if (tasks.length === 0) {
		throw new Error(`Source tag ${sourceTag} not found or has no tasks`);
	}

	await this.saveTasks(tasks, targetTag);
}
```
🛠️ Refactor suggestion
Copy should preserve metadata.
Currently only tasks are copied. Copy source metadata too.
```diff
-const tasks = await this.loadTasks(sourceTag);
+const tasks = await this.loadTasks(sourceTag);
+const meta = await this.loadMetadata(sourceTag);
@@
-await this.saveTasks(tasks, targetTag);
+await this.saveTasks(tasks, targetTag);
+if (meta) {
+	await this.saveMetadata(
+		{ ...meta, tags: [targetTag], lastModified: new Date().toISOString() },
+		targetTag
+	);
+}
```
🤖 Prompt for AI Agents
In packages/tm-core/src/storage/file-storage/file-storage.ts around lines 372 to
380, the copyTag implementation only copies tasks but not the tag metadata; load
the source tag's metadata (e.g., call existing loadTagMetadata/loadTag or read
the metadata file for sourceTag), validate it exists, then write that metadata
to the targetTag (e.g., call saveTagMetadata or write the metadata file
alongside saveTasks) before or after saving tasks so the target tag ends up with
the same metadata as the source; ensure you handle the case where metadata is
missing (either create default metadata or surface an error consistent with
existing behavior) and preserve timestamps/IDs exactly as stored.
```ts
export function isTaskStatus(value: unknown): value is TaskStatus {
	return (
		typeof value === 'string' &&
		[
			'pending',
			'in-progress',
			'done',
			'deferred',
			'cancelled',
			'blocked',
			'review'
		].includes(value)
	);
}
```
🧹 Nitpick (assertive)
Avoid repeating literal lists in guards.
Optional: export constants (e.g., TASK_STATUSES) and reuse in guards to prevent drift.
If you want, I can follow up with a small patch adding exported readonly arrays and reusing them here.
🤖 Prompt for AI Agents
In packages/tm-core/src/types/index.ts around lines 148 to 161, the isTaskStatus
guard repeats the literal list of statuses; extract those string literals into a
single exported readonly constant (e.g., TASK_STATUSES: readonly TaskStatus[])
and then reuse that constant inside isTaskStatus (checking typeof value ===
'string' && TASK_STATUSES.includes(value as TaskStatus)); export the constant so
other modules can reuse it and prevent drift.
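The pattern the prompt describes can be sketched as follows; a minimal standalone version in which the constant and guard names mirror the suggestion rather than the shipped module:

```typescript
// Single source of truth: derive both the union type and the guard from one array.
export const TASK_STATUSES = [
	'pending',
	'in-progress',
	'done',
	'deferred',
	'cancelled',
	'blocked',
	'review'
] as const;

export type TaskStatus = (typeof TASK_STATUSES)[number];

export function isTaskStatus(value: unknown): value is TaskStatus {
	// Widen to readonly string[] so .includes accepts an arbitrary string.
	return typeof value === 'string' && (TASK_STATUSES as readonly string[]).includes(value);
}
```

Adding a status then only touches `TASK_STATUSES`; the union type and the guard stay in sync automatically.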
```ts
export function isTask(obj: unknown): obj is Task {
	if (!obj || typeof obj !== 'object') return false;
	const task = obj as Record<string, unknown>;

	return (
		typeof task.id === 'string' &&
		typeof task.title === 'string' &&
		typeof task.description === 'string' &&
		isTaskStatus(task.status) &&
		isTaskPriority(task.priority) &&
		Array.isArray(task.dependencies) &&
		typeof task.details === 'string' &&
		typeof task.testStrategy === 'string' &&
		Array.isArray(task.subtasks)
	);
}
```
🛠️ Refactor suggestion
Strengthen isTask guard: validate dependency element types and subtask shapes.
Currently only array-ness is checked. Also verify every dependency is string and each subtask passes isSubtask.
```diff
 return (
 	typeof task.id === 'string' &&
 	typeof task.title === 'string' &&
 	typeof task.description === 'string' &&
 	isTaskStatus(task.status) &&
 	isTaskPriority(task.priority) &&
-	Array.isArray(task.dependencies) &&
+	Array.isArray(task.dependencies) &&
+	(task.dependencies as unknown[]).every((d) => typeof d === 'string') &&
 	typeof task.details === 'string' &&
 	typeof task.testStrategy === 'string' &&
-	Array.isArray(task.subtasks)
+	Array.isArray(task.subtasks) &&
+	(task.subtasks as unknown[]).every(isSubtask)
 );
```
📝 Committable suggestion
```ts
export function isTask(obj: unknown): obj is Task {
	if (!obj || typeof obj !== 'object') return false;
	const task = obj as Record<string, unknown>;
	return (
		typeof task.id === 'string' &&
		typeof task.title === 'string' &&
		typeof task.description === 'string' &&
		isTaskStatus(task.status) &&
		isTaskPriority(task.priority) &&
		Array.isArray(task.dependencies) &&
		// ensure every dependency is a string task ID
		(task.dependencies as unknown[]).every((d) => typeof d === 'string') &&
		typeof task.details === 'string' &&
		typeof task.testStrategy === 'string' &&
		Array.isArray(task.subtasks) &&
		// ensure every subtask matches the Subtask guard
		(task.subtasks as unknown[]).every(isSubtask)
	);
}
```
🤖 Prompt for AI Agents
In packages/tm-core/src/types/index.ts around lines 186 to 201, the isTask type
guard only checks that dependencies and subtasks are arrays but not their
element shapes; update the guard so dependencies.every(el => typeof el ===
'string') and subtasks.every(el => isSubtask(el)) are required, and ensure
isSubtask is imported/visible in this module; keep the existing checks and
return false if any element fails validation.
Actionable comments posted: 58
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/cli/src/index.ts (1)
14-21: Optionally re-export ContextResult for programmatic consumers. Small DX win for users importing types from @tm/cli.

```diff
 export type { Task, TaskStatus, TaskPriority, TaskMasterCore } from '@tm/core';
+export type { ContextResult } from './commands/context.command.js';
```
♻️ Duplicate comments (58)
packages/tm-core/tsconfig.json (2)
31-39: Paths won’t resolve without baseUrl. Add it. TypeScript ignores "paths" unless "baseUrl" is set.
9-12: Fix: define baseUrl at the package root. Sets anchor for "@/..." mappings defined below.

```diff
 "outDir": "./dist",
 "rootDir": "./src",
+"baseUrl": ".",
 "strict": true,
```
packages/tm-core/src/config/services/environment-config-provider.service.ts (4)
30-45: Adopt centralized logger, redact values, and surface allowed storage types (include 'auto'). Switch to getLogger, avoid leaking env values, and make the validator/message reference a single ALLOWED list that includes 'auto' to match ConfigManager behavior.

```diff
@@
-import type { PartialConfiguration } from '../../interfaces/configuration.interface.js';
+import type { PartialConfiguration } from '../../interfaces/configuration.interface.js';
+import { getLogger } from '../../logger/index.js';
@@
 export class EnvironmentConfigProvider {
+	private static readonly ALLOWED_STORAGE_TYPES = ['file', 'api', 'auto'] as const;
@@
 	private static readonly DEFAULT_MAPPINGS: EnvMapping[] = [
 		{
 			env: 'TASKMASTER_STORAGE_TYPE',
 			path: ['storage', 'type'],
-			validate: (v: string) => ['file', 'api'].includes(v)
+			validate: (v: string) => EnvironmentConfigProvider.ALLOWED_STORAGE_TYPES.includes(v as any)
 		},
@@
 	];
@@
 	private mappings: EnvMapping[];
+	private readonly logger = getLogger('EnvironmentConfigProvider');
@@
-	if (mapping.validate && !mapping.validate(value)) {
-		console.warn(`Invalid value for ${mapping.env}: ${value}`);
-		continue;
-	}
+	if (mapping.validate && !mapping.validate(value)) {
+		this.logger.warn(
+			`Invalid value for ${mapping.env} (redacted). Allowed: ${EnvironmentConfigProvider.ALLOWED_STORAGE_TYPES.join('|')}.`
+		);
+		continue;
+	}
```
Also applies to: 76-80
119-128: Harden nested assignment to avoid clobbering non-object leaves. Prevents data loss when multiple mappings share prefixes; logs and skips on conflict.

```diff
-	let current = obj;
-	for (const key of keys) {
-		if (!current[key]) {
-			current[key] = {};
-		}
-		current = current[key];
-	}
-
-	current[lastKey] = value;
+	let current: Record<string, unknown> = obj as any;
+	for (const key of keys) {
+		const existing = (current as any)[key];
+		if (existing === undefined) {
+			(current as any)[key] = {};
+		} else if (typeof existing !== 'object' || existing === null || Array.isArray(existing)) {
+			this.logger.warn(`Cannot set nested path "${path.join('.')}" because "${key}" is not an object; skipping`);
+			return;
+		}
+		current = (current as any)[key] as Record<string, unknown>;
+	}
+	(current as any)[lastKey] = value;
```
133-135: Use own-property check for env presence. Avoids inherited props; preserves empty-string as “set”.

```diff
-	return envName in process.env && process.env[envName] !== undefined;
+	return Object.prototype.hasOwnProperty.call(process.env, envName) &&
+		process.env[envName] !== undefined;
```
156-158: Prevent duplicate mappings for the same env var. Replace existing entry instead of pushing duplicates.

```diff
-	this.mappings.push(mapping);
+	const idx = this.mappings.findIndex((m) => m.env === mapping.env);
+	if (idx >= 0) {
+		this.mappings[idx] = mapping;
+	} else {
+		this.mappings.push(mapping);
+	}
```
packages/tm-core/src/config/config-manager.spec.ts (5)
67-78: Fix RuntimeStateManager mock API to match SUT (getCurrentTag/setCurrentTag). Prevents runtime TypeError and assertion failures.

```diff
 vi.mocked(RuntimeStateManager).mockImplementation(
 	() =>
 		({
 			loadState: vi.fn().mockResolvedValue({ activeTag: 'master' }),
 			saveState: vi.fn().mockResolvedValue(undefined),
-			getActiveTag: vi.fn().mockReturnValue('master'),
-			setActiveTag: vi.fn().mockResolvedValue(undefined),
+			getCurrentTag: vi.fn().mockReturnValue('master'),
+			setCurrentTag: vi.fn().mockResolvedValue(undefined),
 			getState: vi.fn().mockReturnValue({ activeTag: 'master' }),
 			updateMetadata: vi.fn().mockResolvedValue(undefined),
 			clearState: vi.fn().mockResolvedValue(undefined)
 		}) as any
 );
```
178-181: Align storage shape with ConfigManager.getStorageConfig(). Base path and apiConfigured are part of the contract.

```diff
-expect(storage).toEqual({ type: 'file' });
+expect(storage).toEqual({
+	type: 'file',
+	basePath: testProjectRoot,
+	apiConfigured: false
+});
```
205-211: Include basePath and apiConfigured in API storage expectation. Matches SUT return type.

```diff
 expect(storage).toEqual({
 	type: 'api',
 	apiEndpoint: 'https://api.example.com',
-	apiAccessToken: 'token123'
+	apiAccessToken: 'token123',
+	apiConfigured: true,
+	basePath: testProjectRoot
 });
```
280-285: Assertion should target setCurrentTag, not setActiveTag. Matches SUT’s method call.

```diff
-expect(stateManager.setActiveTag).toHaveBeenCalledWith('feature-branch');
+expect(stateManager.setCurrentTag).toHaveBeenCalledWith('feature-branch');
```
364-379: watch() is not implemented in SUT; either drop test or add a stub. Current test will fail at compile-time. Prefer adding a no-op stub in ConfigManager.
Add to packages/tm-core/src/config/config-manager.ts:

```ts
// In class ConfigManager
watch(_cb: (cfg: PartialConfiguration) => void): () => void {
	console.warn('Configuration watching not yet implemented');
	return () => {};
}
```
packages/tm-core/src/config/services/environment-config-provider.service.spec.ts (5)
63-77: If you switch to getLogger in the SUT, update tests to mock and assert logger.warn. Preps tests for redacted message and allowed values list.

```diff
+// add near top after imports
+const warnMock = vi.fn();
+vi.mock('../../logger/index.js', () => ({ getLogger: () => ({ warn: warnMock }) }));
@@
-	const warnSpy = vi.spyOn(console, 'warn').mockImplementation(() => {});
+	warnMock.mockReset();
@@
-	expect(warnSpy).toHaveBeenCalledWith(
-		'Invalid value for TASKMASTER_STORAGE_TYPE: invalid'
-	);
+	expect(warnMock).toHaveBeenCalledWith(
+		'Invalid value for TASKMASTER_STORAGE_TYPE (redacted). Allowed: file|api|auto.'
+	);
@@
-	const warnSpy = vi.spyOn(console, 'warn').mockImplementation(() => {});
+	warnMock.mockReset();
@@
-	expect(warnSpy).toHaveBeenCalled();
-	warnSpy.mockRestore();
+	expect(warnMock).toHaveBeenCalled();
```
Also applies to: 257-271
22-25: Do not reassign process.env; restore in-place. Prevents subtle issues with cached references.

```diff
-afterEach(() => {
-	// Restore original environment
-	process.env = { ...originalEnv };
-});
+afterEach(() => {
+	// Restore original environment without reassigning process.env
+	for (const key of Object.keys(process.env)) {
+		if (!Object.prototype.hasOwnProperty.call(originalEnv, key)) delete process.env[key];
+	}
+	for (const [k, v] of Object.entries(originalEnv)) {
+		if (v === undefined) delete process.env[k];
+		else process.env[k] = v;
+	}
+});
```
164-168: Fix “undefined” env var test; delete the key instead. Assigning undefined produces the string "undefined" in Node.

```diff
-	process.env.TASKMASTER_TEST = undefined as any;
+	delete process.env.TASKMASTER_TEST;
```
230-233: Assert nested value directly for clarity.- expect(config).toHaveProperty('new.mapping', 'new-value'); + expect(config.new?.mapping).toBe('new-value');
281-308: Const reassignment of customProvider breaks compilation.Use let if you reassign.
- const customProvider = new EnvironmentConfigProvider([ + let customProvider = new EnvironmentConfigProvider([ @@ - customProvider = new EnvironmentConfigProvider([ + customProvider = new EnvironmentConfigProvider([packages/tm-core/tsup.config.ts (2)
13-15: Avoid duplicate type declaration emission. You already have this noted in a prior comment; reiterating to prevent CI noise and stale .d.ts.
If you keep tsup to emit types, disable tsconfig declarations:

```diff
 # packages/tm-core/tsconfig.json
 {
 	"compilerOptions": {
-		"declaration": true,
-		"emitDeclarationOnly": true,
+		"declaration": false,
+		"emitDeclarationOnly": false,
 		...
 	}
 }
```
24-26: Don’t force ESM resolution for the CJS build (breaks require()). Setting conditions=['module'] can pick ESM-only export paths during the CJS build.
Minimal fix:

```diff
-esbuildOptions(options) {
-	options.conditions = ['module'];
-}
+// Avoid forcing ESM-only resolution; let tsup/esbuild defaults handle both formats.
+esbuildOptions() {},
```
If you truly need 'module' condition, split builds:

```ts
// Alternative full config sketch:
export default defineConfig([
	{
		/* ESM */
		format: ['esm'],
		entry: { ... },
		dts: true,
		/* ... */
		esbuildOptions(o) {
			o.conditions = ['module'];
		}
	},
	{
		/* CJS */
		format: ['cjs'],
		entry: { ... },
		dts: true
		/* ... */
	} // no conditions override
]);
```
packages/tm-core/src/types/index.ts (2)
12-20: DRY the literal lists: extract readonly arrays and derive unions + guards from them. Avoid repeating string literals in types and guards; use single sources of truth.

```diff
-export type TaskStatus =
-	| 'pending'
-	| 'in-progress'
-	| 'done'
-	| 'deferred'
-	| 'cancelled'
-	| 'blocked'
-	| 'review';
+export const TASK_STATUSES = [
+	'pending',
+	'in-progress',
+	'done',
+	'deferred',
+	'cancelled',
+	'blocked',
+	'review'
+] as const;
+export type TaskStatus = (typeof TASK_STATUSES)[number];

-export type TaskPriority = 'low' | 'medium' | 'high' | 'critical';
+export const TASK_PRIORITIES = ['low', 'medium', 'high', 'critical'] as const;
+export type TaskPriority = (typeof TASK_PRIORITIES)[number];

-export type TaskComplexity = 'simple' | 'moderate' | 'complex' | 'very-complex';
+export const TASK_COMPLEXITIES = ['simple', 'moderate', 'complex', 'very-complex'] as const;
+export type TaskComplexity = (typeof TASK_COMPLEXITIES)[number];

 export function isTaskStatus(value: unknown): value is TaskStatus {
-	return (
-		typeof value === 'string' &&
-		['pending', 'in-progress', 'done', 'deferred', 'cancelled', 'blocked', 'review'].includes(value)
-	);
+	return typeof value === 'string' && TASK_STATUSES.includes(value as TaskStatus);
 }

 export function isTaskPriority(value: unknown): value is TaskPriority {
-	return (
-		typeof value === 'string' &&
-		['low', 'medium', 'high', 'critical'].includes(value)
-	);
+	return typeof value === 'string' && TASK_PRIORITIES.includes(value as TaskPriority);
 }

 export function isTaskComplexity(value: unknown): value is TaskComplexity {
-	return (
-		typeof value === 'string' &&
-		['simple', 'moderate', 'complex', 'very-complex'].includes(value)
-	);
+	return typeof value === 'string' && TASK_COMPLEXITIES.includes(value as TaskComplexity);
 }
```
Also applies to: 24-30, 157-170, 175-190
195-210: Strengthen isTask: validate element shapes for dependencies/subtasks. Also reuses isSubtask for subtask validation.

```diff
 return (
 	typeof task.id === 'string' &&
 	typeof task.title === 'string' &&
 	typeof task.description === 'string' &&
 	isTaskStatus(task.status) &&
 	isTaskPriority(task.priority) &&
-	Array.isArray(task.dependencies) &&
+	Array.isArray(task.dependencies) &&
+	(task.dependencies as unknown[]).every((d) => typeof d === 'string') &&
 	typeof task.details === 'string' &&
 	typeof task.testStrategy === 'string' &&
-	Array.isArray(task.subtasks)
+	Array.isArray(task.subtasks) &&
+	(task.subtasks as unknown[]).every(isSubtask)
 );
```
packages/tm-core/src/services/index.ts (1)
6-8: Barrel incomplete: re-export TaskService option/result types for consumers. Expose TaskListResult and GetTaskListOptions alongside TaskService.

```diff
 export { TaskService } from './task-service.js';
+export type { TaskListResult, GetTaskListOptions } from './task-service.js';
 export { OrganizationService } from './organization.service.js';
 export type { Organization, Brief } from './organization.service.js';
```
scripts/modules/commands.js (1)
17-20: Build break: replace static '@tm/cli' import with guarded dynamic import. Static import triggers “Cannot find module '@tm/cli'” in CI when the optional package isn’t present. Load it dynamically and degrade gracefully. This repeats a prior note.
Apply within this hunk:

```diff
-import { log, readJSON } from './utils.js';
-// Import new commands from @tm/cli
-import { ListTasksCommand, AuthCommand, ContextCommand } from '@tm/cli';
+import { log, readJSON } from './utils.js';
+// Optional external CLI commands are loaded dynamically below; see guarded import.
```
Add near top-level (after imports) to load optionally:

```js
// Optional commands from @tm/cli (loaded lazily to avoid hard dependency)
let ListTasksCommand, AuthCommand, ContextCommand;
try {
	({ ListTasksCommand, AuthCommand, ContextCommand } = await import('@tm/cli'));
} catch {
	log?.(
		'debug',
		"Optional package '@tm/cli' not found; skipping external CLI command registrations."
	);
}
```
packages/tm-core/package.json (2)
15-27: Add standardized test script variants. Per repo test workflow, include unit/integration/e2e/CI aliases.

```diff
 "scripts": {
 	"build": "tsup",
 	"dev": "tsup --watch",
 	"test": "vitest run",
 	"test:watch": "vitest",
 	"test:coverage": "vitest run --coverage",
+	"test:unit": "vitest run",
+	"test:integration": "vitest run --dir tests/integration",
+	"test:e2e": "vitest run --dir tests/e2e",
+	"test:ci": "vitest run --coverage",
 	"lint": "biome check --write",
 	"lint:check": "biome check",
 	"lint:fix": "biome check --fix --unsafe",
 	"format": "biome format --write",
 	"format:check": "biome format",
 	"typecheck": "tsc --noEmit"
 },
```
6-13: Align types and exports; fix CJS “require” mapping or go ESM-only
- Prefer “types” to src for internal packages (per repo DX preference).
- Current exports.require points to an ESM file; CJS consumers will break. Choose:
- Dual build: emit CJS and point require -> .cjs, or
- ESM-only: drop all “require” keys.
Recommend dual build + add subpath export for auth.
- "types": "./dist/index.d.ts", + "types": "./src/index.ts", "main": "./dist/index.js", "exports": { - ".": { - "types": "./src/index.ts", - "import": "./dist/index.js", - "require": "./dist/index.js" - } + ".": { + "types": "./src/index.ts", + "import": "./dist/index.js", + "require": "./dist/index.cjs" + }, + "./auth": { + "types": "./src/auth/index.ts", + "import": "./dist/auth/index.js", + "require": "./dist/auth/index.cjs" + } },Also update build to emit both formats (tsup or config):
- "build": "tsup", + "build": "tsup --format esm,cjs --dts",If you opt for ESM-only, instead apply:
"exports": { ".": { "types": "./src/index.ts", "import": "./dist/index.js", - "require": "./dist/index.js" + // remove "require" for ESM-only } },packages/tm-core/src/task-master-core.ts (2)
34-58: Replace `null as any` with definite assignment; keep ctor empty.

Removes unsound casts and preserves TS safety.
 export class TaskMasterCore {
-  private configManager: ConfigManager;
-  private taskService: TaskService;
+  private configManager!: ConfigManager;
+  private taskService!: TaskService;
@@
-  private constructor() {
-    // Services will be initialized in the initialize() method
-    this.configManager = null as any;
-    this.taskService = null as any;
-  }
+  private constructor() {}
176-178: close() is a no-op; delegate cleanup to TaskService.

Ensure resources (file handles, DB clients, watchers) are released.
 async close(): Promise<void> {
-  // TaskService handles storage cleanup internally
+  await (this.taskService as unknown as { close?: () => Promise<void> })?.close?.();
 }

apps/cli/package.json (3)
7-13: Unify typings to source for better DX (internal package convention).

- "types": "./dist/index.d.ts",
+ "types": "./src/index.ts",
  "exports": {
    ".": {
      "types": "./src/index.ts",
      "import": "./dist/index.js",
      "require": "./dist/index.js"
    }
  },
8-13: Fix or drop CJS export; build emits ESM only.

If tsup doesn’t emit CJS, `"require"` pointing to `./dist/index.js` will break `require()`. Either emit dual formats or remove the `require` condition.

Option A (dual build): update tsup to `format: ['esm','cjs']` and point require to `./dist/index.cjs`.

Option B (ESM-only, simplest) – apply:

  "exports": {
    ".": {
      "types": "./src/index.ts",
      "import": "./dist/index.js",
-     "require": "./dist/index.js"
    }
  },

1-15: Add CLI bin shims (task-master / tm).
1-15: Add CLI bin shims (task-master / tm).Enables
npx task-masteror global use.{ "name": "@tm/cli", @@ - "exports": { + "exports": { @@ }, + "bin": { + "task-master": "./dist/index.js", + "tm": "./dist/index.js" + },Ensure
#!/usr/bin/env nodeshebang is preserved indist/index.js(tsupbanneror--shims).packages/tm-core/src/auth/oauth-service.ts (11)
150-151: Use 127.0.0.1 in callback URL for consistency and fewer IPv6/hosts pitfalls.
[Suggest_nitpick]

- const callbackUrl = `http://localhost:${port}/callback`;
+ const callbackUrl = `http://127.0.0.1:${port}/callback`;
299-307: Success (PKCE): reset flow state after resolving.
[Suggest_essential_refactor]

 if (server.listening) {
   server.close();
 }
 // Clear timeout since authentication succeeded
 if (timeoutId) {
   clearTimeout(timeoutId);
 }
+this.authorizationUrl = null;
+this.authorizationReady = null;
+this.originalState = null;
 resolve(authData);
 return;
361-369: Success (token transfer): reset flow state after resolving.
[Suggest_essential_refactor]

 if (server.listening) {
   server.close();
 }
 // Clear timeout since authentication succeeded
 if (timeoutId) {
   clearTimeout(timeoutId);
 }
+this.authorizationUrl = null;
+this.authorizationReady = null;
+this.originalState = null;
 resolve(authData);
375-380: No token received: clear timeout and reset state.
[Suggest_essential_refactor]

 } else {
   if (server.listening) {
     server.close();
   }
+  if (timeoutId) clearTimeout(timeoutId);
+  this.authorizationUrl = null;
+  this.authorizationReady = null;
+  this.originalState = null;
   reject(new AuthenticationError('No access token received', 'NO_TOKEN'));
 }
20-20: Decouple from monorepo: stop importing root package.json; read CLI version from env.

Avoid brittle path/import and bundler issues. Use TASKMASTER_CLI_VERSION. (Using prior guidance.)
-import packageJson from '../../../../package.json' with { type: 'json' };

-  private getCliVersion(): string {
-    return packageJson.version || 'unknown';
-  }
+  private getCliVersion(): string {
+    return process.env.TASKMASTER_CLI_VERSION ?? 'unknown';
+  }

Also applies to: 391-395
27-31: Guard against concurrent authenticate() calls and always clear the flag.

Current instance fields allow races. Add an in-progress guard and reset it in finally. (Using prior guidance.)
 class OAuthService {
   private logger = getLogger('OAuthService');
   private credentialStore: CredentialStore;
   private supabaseClient: SupabaseAuthClient;
   private baseUrl: string;
   private authorizationUrl: string | null = null;
   private originalState: string | null = null;
   private authorizationReady: Promise<void> | null = null;
   private resolveAuthorizationReady: (() => void) | null = null;
+  private inProgress = false;

-  async authenticate(options: OAuthFlowOptions = {}): Promise<AuthCredentials> {
+  async authenticate(options: OAuthFlowOptions = {}): Promise<AuthCredentials> {
+    if (this.inProgress) {
+      throw new AuthenticationError('Authentication already in progress', 'ALREADY_IN_PROGRESS');
+    }

-    try {
+    this.inProgress = true;
+    try {
       // Start the OAuth flow (starts local server)
       const authPromise = this.startFlow(timeout);

-    } catch (error) {
+    } catch (error) {
       const authError =
         error instanceof AuthenticationError
           ? error
           : new AuthenticationError(
               `OAuth authentication failed: ${(error as Error).message}`,
               'OAUTH_FAILED',
               error
             );
       // Notify error
       if (onError) {
         onError(authError);
       }
       throw authError;
-    }
+    } finally {
+      this.inProgress = false;
+    }

Also applies to: 45-46, 55-55, 119-121
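The guard-and-finally pattern above can be exercised in isolation. A minimal sketch, assuming nothing about the real OAuthService API (`FlowGuard` is a hypothetical name used only for illustration):

```typescript
// Minimal sketch of an in-progress guard that always resets, even on failure.
// FlowGuard is illustrative; the real OAuthService would inline this logic.
class FlowGuard {
	private inProgress = false;

	async run<T>(flow: () => Promise<T>): Promise<T> {
		if (this.inProgress) {
			throw new Error('Authentication already in progress');
		}
		this.inProgress = true;
		try {
			return await flow();
		} finally {
			// Reset in finally so a thrown error never leaves the guard stuck
			this.inProgress = false;
		}
	}
}
```

A second call made while the first flow is pending rejects immediately; once the first call settles (success or failure), new calls proceed.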
140-148: Add a server error handler to avoid a hanging promise and a leaked server.

Bind an 'error' listener before/after listen to reject and clean up. (Using prior guidance.)
 // Create local HTTP server for OAuth callback
 const server = http.createServer();
+server.on('error', (err) => {
+  try { if (server.listening) server.close(); } catch {}
+  if (this.resolveAuthorizationReady) {
+    this.resolveAuthorizationReady();
+    this.resolveAuthorizationReady = null;
+  }
+  reject(new AuthenticationError(`Auth callback server error: ${String((err as Error).message)}`, 'SERVER_ERROR', err));
+});
172-182: Redact secrets and drop PII from cliData and logs.

Do not transmit or log state/host/user; log a redacted summary. (Using prior guidance.)
 // Prepare CLI data object (server handles OAuth/PKCE)
 const cliData: CliData = {
   callback: callbackUrl,
-  state: state,
+  state: state, // kept for server use; never log
   name: 'Task Master CLI',
   version: this.getCliVersion(),
-  device: os.hostname(),
-  user: os.userInfo().username,
   platform: os.platform(),
   timestamp: Date.now()
 };
@@
-this.logger.debug('CLI data:', cliData);
+this.logger.debug('CLI data prepared (redacted):', {
+  name: cliData.name,
+  version: cliData.version,
+  platform: cliData.platform,
+  timestamp: cliData.timestamp,
+  state: '<redacted>',
+  callback: '<redacted>'
+});

Also applies to: 198-202
210-224: On timeout: clear state to allow subsequent attempts.

Currently rejects without resetting internal fields. (Using prior guidance.)
 // Set timeout for authentication
 timeoutId = setTimeout(() => {
   if (server.listening) {
     server.close();
     // Clean up the readiness promise if still pending
     if (this.resolveAuthorizationReady) {
       this.resolveAuthorizationReady();
       this.resolveAuthorizationReady = null;
     }
-    reject(
-      new AuthenticationError('Authentication timeout', 'AUTH_TIMEOUT')
-    );
+    this.authorizationUrl = null;
+    this.authorizationReady = null;
+    this.originalState = null;
+    reject(new AuthenticationError('Authentication timeout', 'AUTH_TIMEOUT'));
   }
 }, timeout);
251-261: On error callback: clear timeout and reset flow state.

 if (error) {
   if (server.listening) {
     server.close();
   }
+  if (timeoutId) clearTimeout(timeoutId);
+  this.authorizationUrl = null;
+  this.authorizationReady = null;
+  this.originalState = null;
   reject(
     new AuthenticationError(
       errorDescription || error || 'Authentication failed',
       'OAUTH_ERROR'
     )
   );
   return;
 }
264-273: Invalid CSRF state: also clear timeout and reset state.

 if (returnedState !== this.originalState) {
   if (server.listening) {
     server.close();
   }
+  if (timeoutId) clearTimeout(timeoutId);
+  this.authorizationUrl = null;
+  this.authorizationReady = null;
+  this.originalState = null;
   reject(
     new AuthenticationError('Invalid state parameter', 'INVALID_STATE')
   );
   return;
 }

packages/tm-core/src/clients/supabase-client.ts (1)
239-257: Optional: unify unexpected error handling in getUser() with other methods.

Throw AuthenticationError in catch to align with exchange/refresh/setSession behavior. (Previously noted.)
- } catch (error) {
-   this.logger.error('Error getting user:', error);
-   return null;
- }
+ } catch (error) {
+   this.logger.error('Error getting user:', error);
+   throw new AuthenticationError(
+     `Error getting user: ${(error as Error).message}`,
+     'USER_FETCH_FAILED'
+   );
+ }

packages/tm-core/src/auth/types.ts (3)
10-14: Standardize expiresAt and relax tokenType

Use a single numeric epoch-ms for expiresAt and a broader tokenType to avoid brittle consumers.
Apply:
 export interface AuthCredentials {
   token: string;
   refreshToken?: string;
   userId: string;
   email?: string;
-  expiresAt?: string | number;
-  tokenType?: 'standard';
+  /** Expiration timestamp in epoch milliseconds */
+  expiresAt?: number;
+  /** Token flavor; e.g., 'standard' | 'apiKey' | 'bearer' */
+  tokenType?: string;
   savedAt: string;
   selectedContext?: UserContext;
 }
24-37: Broaden onError type; clarify timeout default

Make onError accept unknown and align the timeout doc with the runtime default.

 export interface OAuthFlowOptions {
   /** Callback to open the browser with the auth URL. If not provided, browser won't be opened */
   openBrowser?: (url: string) => Promise<void>;
-  /** Timeout for the OAuth flow in milliseconds. Default: 300000 (5 minutes) */
+  /** Timeout in ms (default: 300_000 = 5 minutes) */
   timeout?: number;
   /** Callback to be invoked with the authorization URL */
   onAuthUrl?: (url: string) => void;
   /** Callback to be invoked when waiting for authentication */
   onWaitingForAuth?: () => void;
   /** Callback to be invoked on successful authentication */
   onSuccess?: (credentials: AuthCredentials) => void;
   /** Callback to be invoked on authentication error */
-  onError?: (error: AuthenticationError) => void;
+  onError?: (error: unknown) => void;
 }
88-100: Use native Error cause; preserve prototype (drop manual stack surgery)

Leverage ErrorOptions and keep instanceof working across transpilation targets.
 export class AuthenticationError extends Error {
   constructor(
     message: string,
     public code: AuthErrorCode,
     public cause?: unknown
   ) {
-    super(message);
-    this.name = 'AuthenticationError';
-    if (cause && cause instanceof Error) {
-      this.stack = `${this.stack}\nCaused by: ${cause.stack}`;
-    }
+    super(message, cause ? { cause } : undefined);
+    Object.setPrototypeOf(this, new.target.prototype);
+    this.name = 'AuthenticationError';
   }
 }
33-35: Clarify comment: projectPath ≠ projectId

Avoid implying we pass a projectId here; we pass projectPath for later derivation.
- // Wrap the storage config in the expected format, including projectPath
- // This ensures ApiStorage receives the projectPath for projectId
+ // Wrap storage config with projectPath (used to derive a stable projectId for API storage)
60-93: DRY: extract helper to merge auth token into storage config

The 'api' branch duplicates the same merge as the 'auto' path. Extract a withAuth helper to reduce drift.
+ private static withAuth(
+   config: Partial<IConfiguration>,
+   token: string
+ ): void {
+   const nextStorage: StorageSettings = {
+     ...(config.storage as StorageSettings),
+     type: 'api',
+     apiAccessToken: token,
+     apiEndpoint:
+       config.storage?.apiEndpoint ||
+       process.env.HAMSTER_API_URL ||
+       'https://tryhamster.com/api'
+   };
+   config.storage = nextStorage;
+ }

Then replace inline merges here and in the 'auto' branch with:
StorageFactory.withAuth(config, credentials.token);
224-226: Add safe projectId derivation helper

Prevent leaking local paths and ensure an API-safe identifier.
 static isHamsterAvailable(config: Partial<IConfiguration>): boolean {
   return !!(config.storage?.apiEndpoint && config.storage?.apiAccessToken);
 }
+
+ /** Derive a stable, API-safe project identifier from a filesystem path */
+ private static deriveProjectId(p?: string): string {
+   if (!p) return 'default';
+   try {
+     const base = p.split(/[\\/]/).pop() || 'default';
+     return base.replace(/[^a-zA-Z0-9-_]/g, '-').slice(0, 64);
+   } catch {
+     return 'default';
+   }
+ }
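The derivation above behaves like this as a free function (a sketch; the real helper would live as a private static on StorageFactory):

```typescript
// Derive a stable, API-safe identifier from a filesystem path (sketch).
function deriveProjectId(p?: string): string {
	if (!p) return 'default';
	// Take the last path segment (handles both / and \ separators)
	const base = p.split(/[\\/]/).pop() || 'default';
	// Replace anything outside [a-zA-Z0-9-_] and cap the length at 64
	return base.replace(/[^a-zA-Z0-9-_]/g, '-').slice(0, 64);
}

console.log(deriveProjectId('/home/user/My Project!')); // "My-Project-"
console.log(deriveProjectId('C:\\work\\tasks'));        // "tasks"
console.log(deriveProjectId(undefined));                // "default"
```

Only the leaf directory name survives, so a full local path like `/home/user/...` never reaches the API.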
187-190: Validation: missing storage.type should be valid (defaults to 'auto')

The current validator rejects undefined type while the factory treats it as 'auto'.
- if (!storageType) {
-   errors.push('Storage type is not specified');
-   return { isValid: false, errors };
- }
+ if (!storageType) {
+   // Missing is acceptable; factory defaults to 'auto'
+   return { isValid: true, errors };
+ }
151-162: Bug: projectId uses config.projectPath; pass method param and derive a safe slug

createApiStorage ignores the projectPath argument and may leak filesystem paths. Thread projectPath through and sanitize.
- private static createApiStorage(config: Partial<IConfiguration>): ApiStorage {
+ private static createApiStorage(
+   config: Partial<IConfiguration>,
+   projectPath: string
+ ): ApiStorage {
   // Use our SupabaseAuthClient instead of creating a raw Supabase client
   const supabaseAuthClient = new SupabaseAuthClient();
   const supabaseClient = supabaseAuthClient.getClient();
   return new ApiStorage({
     supabaseClient,
-    projectId: config.projectPath || '',
+    projectId: StorageFactory.deriveProjectId(projectPath || config.projectPath || ''),
     enableRetry: config.retry?.retryOnNetworkError,
     maxRetries: config.retry?.retryAttempts
   });
 }

packages/tm-core/src/storage/api-storage.ts (1)
422-429: Fix exists() to honor the IStorage contract and optional tag

Respect tag existence; otherwise check overall stats.
- async exists(): Promise<boolean> {
-   try {
-     await this.initialize();
-     return true;
-   } catch {
-     return false;
-   }
- }
+ async exists(tag?: string): Promise<boolean> {
+   try {
+     await this.ensureInitialized();
+     if (tag) {
+       const metadata = await this.loadMetadata(tag);
+       if (metadata) return (metadata.taskCount ?? 0) > 0;
+       const tasks = await this.loadTasks(tag);
+       return tasks.length > 0;
+     }
+     const stats = await this.getStats();
+     return (stats.totalTasks ?? 0) > 0;
+   } catch {
+     return false;
+   }
+ }

packages/tm-core/src/services/task-service.ts (8)
78-118: One-time init pattern across public methods

After adding `await this.initialize()` to getTaskList, other methods are covered via delegation. No further changes needed.
Also applies to: 132-159, 161-213
82-90: Initialize storage before any access

getTaskList uses this.storage without guaranteeing initialize() has run. Add initialization to avoid null access and racy state.
 async getTaskList(options: GetTaskListOptions = {}): Promise<TaskListResult> {
+  await this.initialize();
   // Determine which tag to use
   const activeTag = this.configManager.getActiveTag();
   const tag = options.tag || activeTag;
111-117: Return resolved storage type, not config (which may be 'auto')

Use the value determined at initialization.
 return {
   tasks,
   total: rawTasks.length,
   filtered: filteredEntities.length,
   tag: options.tag, // Only include tag if explicitly provided
-  storageType: this.configManager.getStorageConfig().type
+  storageType: this.resolvedStorageType
 };
135-142: Normalize ID comparison to string

Prevents mismatches if upstream provides numeric IDs.
- return result.tasks.find((t) => t.id === taskId) || null;
+ return result.tasks.find((t) => t.id?.toString() === taskId) || null;
218-254: Bug: completedIds computed from filtered set — breaks dependency resolution

You're filtering to pending/in-progress, then searching for done in that subset (always empty). Build completedIds from the full list first.
 async getNextTask(tag?: string): Promise<Task | null> {
-  const result = await this.getTaskList({
-    tag,
-    filter: {
-      status: ['pending', 'in-progress']
-    }
-  });
-
-  // Find tasks with no dependencies or all dependencies satisfied
-  const completedIds = new Set(
-    result.tasks.filter((t) => t.status === 'done').map((t) => t.id)
-  );
-
-  const availableTasks = result.tasks.filter((task) => {
+  await this.initialize();
+  // Load all tasks to evaluate dependencies correctly
+  const all = await this.getTaskList({ tag, includeSubtasks: true });
+  const completedIds = new Set(
+    all.tasks.filter((t) => t.status === 'done').map((t) => t.id?.toString())
+  );
+  const candidates = all.tasks.filter(
+    (t) => t.status === 'pending' || t.status === 'in-progress'
+  );
+  const availableTasks = candidates.filter((task) => {
     if (task.status === 'done' || task.status === 'blocked') {
       return false;
     }
     if (!task.dependencies || task.dependencies.length === 0) {
       return true;
     }
     return task.dependencies.every((depId) =>
       completedIds.has(depId.toString())
     );
   });

   // Sort by priority
   availableTasks.sort((a, b) => {
     const priorityOrder = { critical: 0, high: 1, medium: 2, low: 3 };
     const aPriority = priorityOrder[a.priority || 'medium'];
     const bPriority = priorityOrder[b.priority || 'medium'];
     return aPriority - bPriority;
   });

   return availableTasks[0] || null;
 }
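The fix can be demonstrated in isolation. This sketch mirrors the intended logic with a minimal local task shape (`SketchTask` is invented for the example; the real Task entity has more fields):

```typescript
// Minimal task shape for illustration only.
interface SketchTask {
	id: string | number;
	status: 'pending' | 'in-progress' | 'done' | 'blocked';
	priority?: 'critical' | 'high' | 'medium' | 'low';
	dependencies?: Array<string | number>;
}

function getNextTask(all: SketchTask[]): SketchTask | null {
	// completedIds must come from the FULL list, not a pre-filtered subset
	const completedIds = new Set(
		all.filter((t) => t.status === 'done').map((t) => t.id.toString())
	);
	// Candidates: pending/in-progress tasks whose dependencies are all done
	const candidates = all.filter(
		(t) =>
			(t.status === 'pending' || t.status === 'in-progress') &&
			(t.dependencies ?? []).every((d) => completedIds.has(d.toString()))
	);
	// Highest priority first
	const order = { critical: 0, high: 1, medium: 2, low: 3 } as const;
	candidates.sort(
		(a, b) => order[a.priority ?? 'medium'] - order[b.priority ?? 'medium']
	);
	return candidates[0] ?? null;
}
```

With tasks [1 done; 2 pending dep 1, low; 3 pending dep 1, high; 4 pending dep 99], the buggy version sees an empty completedIds set and skips every task that has dependencies, while this version returns task 3, the highest-priority ready task.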
309-316: Guard .toLowerCase() calls against undefined

Avoids runtime errors if any field is missing.
- const inTitle = task.title.toLowerCase().includes(searchLower);
- const inDescription = task.description
-   .toLowerCase()
-   .includes(searchLower);
- const inDetails = task.details.toLowerCase().includes(searchLower);
+ const inTitle = (task.title ?? '').toLowerCase().includes(searchLower);
+ const inDescription = (task.description ?? '')
+   .toLowerCase()
+   .includes(searchLower);
+ const inDetails = (task.details ?? '').toLowerCase().includes(searchLower);
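The same guard works as a small predicate. A sketch under the assumption that only these three optional string fields matter (`matchesSearch` is a hypothetical helper, not part of the package):

```typescript
// Searchable mirrors just the three optional text fields used by the filter.
interface Searchable {
	title?: string;
	description?: string;
	details?: string;
}

// Hypothetical helper illustrating the nullish-coalescing guard.
function matchesSearch(task: Searchable, query: string): boolean {
	const q = query.toLowerCase();
	// Coalesce each optional field so .toLowerCase() never runs on undefined
	return [task.title, task.description, task.details].some((field) =>
		(field ?? '').toLowerCase().includes(q)
	);
}
```

An empty task simply matches nothing instead of throwing `TypeError: Cannot read properties of undefined`.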
337-339: Expose the resolved backend

Align getStorageType() with the runtime backend chosen.
- getStorageType(): 'file' | 'api' {
-   return this.configManager.getStorageConfig().type;
- }
+ getStorageType(): 'file' | 'api' {
+   return this.resolvedStorageType;
+ }
46-55: Type storage safely and avoid null-as-any; track resolved storage type

Use a nullable field and an explicit resolvedStorageType to avoid unsafe casts and to surface the actual backend.
- private configManager: ConfigManager;
- private storage: IStorage;
- private initialized = false;
+ private configManager: ConfigManager;
+ private storage: IStorage | null = null;
+ private resolvedStorageType: 'file' | 'api' = 'file';
+ private initialized = false;
@@
- // Storage will be created during initialization
- this.storage = null as any;
+ // Storage will be created during initialization
| "scripts": { | ||
| "build": "tsup", | ||
| "dev": "tsup --watch", | ||
| "typecheck": "tsc --noEmit", | ||
| "lint": "biome check src", | ||
| "format": "biome format --write src", | ||
| "test": "vitest run", | ||
| "test:watch": "vitest" | ||
| }, |
🧹 Nitpick (assertive)
Add standardized test scripts (coverage/CI).
Aligns with repo test workflow.
"scripts": {
"build": "tsup",
"dev": "tsup --watch",
"typecheck": "tsc --noEmit",
"lint": "biome check src",
"format": "biome format --write src",
"test": "vitest run",
- "test:watch": "vitest"
+ "test:watch": "vitest",
+ "test:coverage": "vitest run --coverage",
+ "test:ci": "vitest run --coverage"
},

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "scripts": { | |
| "build": "tsup", | |
| "dev": "tsup --watch", | |
| "typecheck": "tsc --noEmit", | |
| "lint": "biome check src", | |
| "format": "biome format --write src", | |
| "test": "vitest run", | |
| "test:watch": "vitest" | |
| }, | |
| In apps/cli/package.json, update the "scripts" section to: | |
| "scripts": { | |
| "build": "tsup", | |
| "dev": "tsup --watch", | |
| "typecheck": "tsc --noEmit", | |
| "lint": "biome check src", | |
| "format": "biome format --write src", | |
| "test": "vitest run", | |
| "test:watch": "vitest", | |
| "test:coverage": "vitest run --coverage", | |
| "test:ci": "vitest run --coverage" | |
| }, |
🤖 Prompt for AI Agents
In apps/cli/package.json around lines 16 to 24, the scripts section is missing
standardized test/CI scripts; add CI-friendly scripts such as "test:ci" that
runs vitest in a non-interactive mode (e.g., vitest run --reporter=dot), and
"test:coverage" that runs vitest with coverage enabled (e.g., vitest run
--coverage) and optionally a "coverage:check" to enforce thresholds; update the
scripts object to include these new entries alongside existing test/test:watch
so CI pipelines can run tests and collect coverage consistently.
| this.command('set') | ||
| .description('Set context directly') | ||
| .option('--org <id>', 'Organization ID') | ||
| .option('--org-name <name>', 'Organization name') | ||
| .option('--brief <id>', 'Brief ID') | ||
| .option('--brief-name <name>', 'Brief name') | ||
| .action(async (options) => { | ||
| await this.executeSet(options); | ||
| }); | ||
| } |
🧹 Nitpick (assertive)
Commander action typing for options
Also type the action options to match SetOptions.
- .action(async (options) => {
+ .action(async (options: SetOptions) => {
await this.executeSet(options);
});
| this.command('set') | |
| .description('Set context directly') | |
| .option('--org <id>', 'Organization ID') | |
| .option('--org-name <name>', 'Organization name') | |
| .option('--brief <id>', 'Brief ID') | |
| .option('--brief-name <name>', 'Brief name') | |
| .action(async (options) => { | |
| await this.executeSet(options); | |
| }); | |
| } | |
| this.command('set') | |
| .description('Set context directly') | |
| .option('--org <id>', 'Organization ID') | |
| .option('--org-name <name>', 'Organization name') | |
| .option('--brief <id>', 'Brief ID') | |
| .option('--brief-name <name>', 'Brief name') | |
| .action(async (options: SetOptions) => { | |
| await this.executeSet(options); | |
| }); |
🤖 Prompt for AI Agents
In apps/cli/src/commands/context.command.ts around lines 95 to 104, the
Commander .action callback currently has an untyped options parameter; annotate
it to use the existing SetOptions type so the action parameter matches the
expected shape. Import or reference SetOptions at the top of the file if not
already imported, and update the .action signature to accept options: SetOptions
(and include the optional command parameter if your handler expects it), then
pass that typed options into this.executeSet to keep compile-time safety.
| success: true, | ||
| action: 'select-brief', | ||
| context: this.authManager.getContext() || undefined, | ||
| message: `Selected brief: ${selectedBrief.name}` | ||
| }; |
Fix message: selectedBrief.name may be undefined
Use the computed briefName to avoid showing "undefined" in user output.
- return {
+ return {
success: true,
action: 'select-brief',
context: this.authManager.getContext() || undefined,
- message: `Selected brief: ${selectedBrief.name}`
+ message: `Selected brief: ${briefName}`
};
| success: true, | |
| action: 'select-brief', | |
| context: this.authManager.getContext() || undefined, | |
| message: `Selected brief: ${selectedBrief.name}` | |
| }; | |
| return { | |
| success: true, | |
| action: 'select-brief', | |
| context: this.authManager.getContext() || undefined, | |
| message: `Selected brief: ${briefName}` | |
| }; |
🤖 Prompt for AI Agents
In apps/cli/src/commands/context.command.ts around lines 347 to 351, the code
constructs the user message using selectedBrief.name which can be undefined;
change the message to use the already computed briefName (or compute it if
missing) so it never prints "undefined" — update the object to set message:
`Selected brief: ${briefName}` (ensuring briefName exists or falls back to a
sensible default).
| static registerOn(program: Command): Command { | ||
| const contextCommand = new ContextCommand(); | ||
| program.addCommand(contextCommand); | ||
| return contextCommand; | ||
| } |
🧹 Nitpick (assertive)
Return the concrete type from registerOn for chaining
Expose ContextCommand for fluent usage.
- static registerOn(program: Command): Command {
+ static registerOn(program: Command): ContextCommand {
const contextCommand = new ContextCommand();
program.addCommand(contextCommand);
return contextCommand;
}
| static registerOn(program: Command): Command { | |
| const contextCommand = new ContextCommand(); | |
| program.addCommand(contextCommand); | |
| return contextCommand; | |
| } | |
| static registerOn(program: Command): ContextCommand { | |
| const contextCommand = new ContextCommand(); | |
| program.addCommand(contextCommand); | |
| return contextCommand; | |
| } |
🤖 Prompt for AI Agents
In apps/cli/src/commands/context.command.ts around lines 556 to 560, the static
registerOn method currently returns the generic Command type which prevents
fluent usage of ContextCommand; change the method signature to return
ContextCommand (the concrete type) while still creating and adding the
ContextCommand instance to the program and returning that instance, and update
any call sites whose types expect Command to accept ContextCommand if needed.
| export { AuthManager } from './auth-manager.js'; | ||
| export { CredentialStore } from './credential-store.js'; | ||
| export { OAuthService } from './oauth-service.js'; | ||
| export { SupabaseSessionStorage } from './supabase-session-storage'; |
Fix ESM path: add missing .js extension
All other re-exports use .js; this one will fail under NodeNext/ESM resolution.
-export { SupabaseSessionStorage } from './supabase-session-storage';
+export { SupabaseSessionStorage } from './supabase-session-storage.js';
| export { SupabaseSessionStorage } from './supabase-session-storage'; | |
| export { SupabaseSessionStorage } from './supabase-session-storage.js'; |
🤖 Prompt for AI Agents
In packages/tm-core/src/auth/index.ts around line 8, the re-export lacks the .js
extension which breaks NodeNext/ESM resolution; update the export path to
include the .js extension (i.e., change './supabase-session-storage' to
'./supabase-session-storage.js') so it matches the other ESM-style re-exports
and resolves correctly under ESM.
| "moduleResolution": "bundler", | ||
| "moduleDetection": "force", | ||
| "types": ["node"], | ||
| "resolveJsonModule": true, |
🧹 Nitpick (assertive)
Confirm toolchain for "moduleResolution": "bundler".
If you run code via tsx/ts-node or Node directly (not only a bundler like tsup/esbuild), you’ll also need runtime alias resolution (e.g., tsconfig-paths/register or bundler alias config). Otherwise "@/..." will type-check but fail at runtime.
🤖 Prompt for AI Agents
In packages/tm-core/tsconfig.json around lines 26 to 29, moduleResolution is set
to "bundler" which only provides compile-time alias handling; if this package is
run with tsx/ts-node or plain Node you must either switch moduleResolution to
"node" (or "node16"/"nodenext") to match runtime resolution, or ensure runtime
alias resolution is installed/configured (e.g., add tsconfig-paths/register to
your node startup for ts-node/tsx or mirror aliases in your bundler/esbuild/tsup
config). Update the tsconfig or add runtime alias setup to ensure "@/..."
imports work at runtime.
| "@/types": ["./src/types"], | ||
| "@/providers": ["./src/providers"], | ||
| "@/storage": ["./src/storage"], | ||
| "@/parser": ["./src/parser"], | ||
| "@/utils": ["./src/utils"], | ||
| "@/errors": ["./src/errors"] |
🧹 Nitpick (assertive)
Trim redundant path aliases.
"@/*" already covers these subpaths; keeping both increases maintenance surface without benefits.
"paths": {
"@/*": ["./src/*"],
- "@/types": ["./src/types"],
- "@/providers": ["./src/providers"],
- "@/storage": ["./src/storage"],
- "@/parser": ["./src/parser"],
- "@/utils": ["./src/utils"],
- "@/errors": ["./src/errors"]
+ "@/types": ["./src/types"] // keep only if you want explicit affordances
}

If you prefer explicit affordances, keep them; otherwise rely on "@/*".
| "@/types": ["./src/types"], | |
| "@/providers": ["./src/providers"], | |
| "@/storage": ["./src/storage"], | |
| "@/parser": ["./src/parser"], | |
| "@/utils": ["./src/utils"], | |
| "@/errors": ["./src/errors"] | |
| "paths": { | |
| "@/*": ["./src/*"], | |
| "@/types": ["./src/types"] // keep only if you want explicit affordances | |
| } |
🤖 Prompt for AI Agents
In packages/tm-core/tsconfig.json around lines 33 to 38, there are explicit path
aliases for "@/types", "@/providers", "@/storage", "@/parser", "@/utils", and
"@/errors" which are redundant because the "@/ *" alias already covers these
subpaths; remove these redundant specific entries from the "paths" section so
only the "@/ *" (or your chosen umbrella alias) remains, or if you want to keep
explicit entries for clarity, add a brief comment explaining why; ensure the
final JSON remains valid (commas, quoting) after removing the redundant lines.
| "include": ["src/**/*"], | ||
| "exclude": ["node_modules", "dist", "tests", "**/*.test.ts", "**/*.spec.ts"] |
🧹 Nitpick (assertive)
Tests excluded — ensure a test tsconfig exists.
Excluding tests is fine if apps/tests use a separate tsconfig (e.g., tsconfig.test.json). Verify this to avoid losing type-checking in tests.
I can draft a minimal tsconfig.test.json that extends this config if helpful.
| 'errors/index': 'src/errors/index.ts' | ||
| }, | ||
| format: ['cjs', 'esm'], | ||
| dts: true, |
🧹 Nitpick (assertive)
Option: turn off dts to match your DX preference (types -> src).
Per your learning preference for internal pkgs, consider not emitting .d.ts to speed builds and avoid churn. If external consumers appear later, re-enable.
Apply if desired:
- dts: true,
+ dts: false,📝 Committable suggestion
🤖 Prompt for AI Agents
packages/tm-core/tsup.config.ts around line 14: the config currently enables
generation of .d.ts files (dts: true); to follow your DX preference and speed
builds, set dts to false (or remove the option) so TypeScript declaration files
are not emitted for this internal package; update the tsup config accordingly
and run a quick build to verify no .d.ts files are produced.
// NEW: Register the new list command from @tm/cli
// This command handles all its own configuration and logic
ListTasksCommand.registerOn(programInstance);

// Only pass complexityReportPath if user provided a custom path
if (options.report && options.report !== COMPLEXITY_REPORT_FILE) {
	initOptions.complexityReportPath = options.report;
}

const taskMaster = initTaskMaster(initOptions);

// Register the auth command from @tm/cli
// Handles authentication with tryhamster.com
AuthCommand.registerOn(programInstance);

const statusFilter = options.status;
const withSubtasks = options.withSubtasks || false;
const compact = options.compact || false;
const tag = taskMaster.getCurrentTag();
// Show current tag context
displayCurrentTagIndicator(tag);

if (!compact) {
	console.log(
		chalk.blue(`Listing tasks from: ${taskMaster.getTasksPath()}`)
	);
	if (statusFilter) {
		console.log(chalk.blue(`Filtering by status: ${statusFilter}`));
	}
	if (withSubtasks) {
		console.log(chalk.blue('Including subtasks in listing'));
	}
}

await listTasks(
	taskMaster.getTasksPath(),
	statusFilter,
	taskMaster.getComplexityReportPath(),
	withSubtasks,
	compact ? 'compact' : 'text',
	{ projectRoot: taskMaster.getProjectRoot(), tag }
);
});

// Register the context command from @tm/cli
// Manages workspace context (org/brief selection)
ContextCommand.registerOn(programInstance);
🛠️ Refactor suggestion
Conditionally register external commands only when available
Wrap registrations so the CLI still works when @tm/cli isn’t installed.
-// NEW: Register the new list command from @tm/cli
-// This command handles all its own configuration and logic
-ListTasksCommand.registerOn(programInstance);
-
-// Register the auth command from @tm/cli
-// Handles authentication with tryhamster.com
-AuthCommand.registerOn(programInstance);
-
-// Register the context command from @tm/cli
-// Manages workspace context (org/brief selection)
-ContextCommand.registerOn(programInstance);
+// Register optional commands from @tm/cli if present
+if (ListTasksCommand && AuthCommand && ContextCommand) {
+ ListTasksCommand.registerOn(programInstance);
+ AuthCommand.registerOn(programInstance);
+ ContextCommand.registerOn(programInstance);
+} else {
+ log('debug', "Skipping '@tm/cli' command registrations (module not available).");
+}
🤖 Prompt for AI Agents
In scripts/modules/commands.js around lines 1740 to 1751, the code
unconditionally calls registerOn for commands from @tm/cli which will break the
CLI if that package isn't installed; wrap each external command registration in
a safe existence check (e.g., try/catch require/import or conditional
require.resolve) and only call Command.registerOn(programInstance) when the
module and the Command export are present, otherwise skip registration silently
(or log a debug message) so the CLI continues to work when @tm/cli is not
available.
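The try/catch dynamic-import approach mentioned above can be sketched as a small helper (the `tryImport` name and usage are illustrative, not from the PR):

```typescript
// Resolve an optional module, returning null instead of throwing when the
// package is not installed, so the CLI can skip its registrations.
async function tryImport(
	specifier: string
): Promise<Record<string, any> | null> {
	try {
		return await import(specifier);
	} catch {
		return null;
	}
}

// Usage sketch:
// const tmCli = await tryImport('@tm/cli');
// if (tmCli?.ListTasksCommand) tmCli.ListTasksCommand.registerOn(programInstance);
// else log('debug', "Skipping '@tm/cli' command registrations (module not available).");
```

This keeps the registration sites declarative while the failure mode (package absent) is handled in one place.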
async logout(): Promise<void> {
	try {
		// First try to sign out from Supabase to revoke tokens
		await this.supabaseClient.signOut();
	} catch (error) {
		// Log but don't throw - we still want to clear local credentials
		getLogger('AuthManager').warn('Failed to sign out from Supabase:', error);
	}

	// Always clear local credentials (removes auth.json file)
	this.credentialStore.clearCredentials();
}
🛠️ Refactor suggestion
Reset OrganizationService on logout
Ensure a fresh instance post-logout; avoids stale session reuse.
async logout(): Promise<void> {
try {
// First try to sign out from Supabase to revoke tokens
await this.supabaseClient.signOut();
} catch (error) {
// Log but don't throw - we still want to clear local credentials
getLogger('AuthManager').warn('Failed to sign out from Supabase:', error);
}
// Always clear local credentials (removes auth.json file)
this.credentialStore.clearCredentials();
+ this.organizationService = undefined;
}
async updateContext(context: Partial<UserContext>): Promise<void> {
	const credentials = this.getCredentials();
	if (!credentials) {
		throw new AuthenticationError('Not authenticated', 'NOT_AUTHENTICATED');
	}

	// Merge with existing context
	const existingContext = credentials.selectedContext || {};
	const newContext: UserContext = {
		...existingContext,
		...context,
		updatedAt: new Date().toISOString()
	};

	// Save updated credentials with new context
	const updatedCredentials: AuthCredentials = {
		...credentials,
		selectedContext: newContext
	};

	this.credentialStore.saveCredentials(updatedCredentials);
}
🛠️ Refactor suggestion
Clear brief selection when org changes to avoid stale context
If orgId changes and briefId isn’t explicitly provided, drop existing brief fields.
async updateContext(context: Partial<UserContext>): Promise<void> {
@@
- const newContext: UserContext = {
- ...existingContext,
- ...context,
- updatedAt: new Date().toISOString()
- };
+ const orgChanged =
+ context.orgId !== undefined && context.orgId !== existingContext.orgId;
+ const newContext: UserContext = {
+ ...(orgChanged
+ ? { orgId: context.orgId, orgName: context.orgName, updatedAt: '' } // reset then merge below
+ : existingContext),
+ ...context,
+ // Clear brief linkage if org changed and brief not explicitly set
+ ...(orgChanged && context.briefId === undefined
+ ? { briefId: undefined, briefName: undefined }
+ : {}),
+ updatedAt: new Date().toISOString()
+ };📝 Committable suggestion
🤖 Prompt for AI Agents
In packages/tm-core/src/auth/auth-manager.ts around lines 181 to 202, the
updateContext merge can leave a stale brief selection when orgId changes but
briefId isn't provided; detect if the incoming context contains an orgId that
differs from existingContext.orgId and the incoming context does not include
briefId, and in that case remove/clear brief-related fields (e.g., briefId,
briefName or any brief* properties) from the newContext before saving so the
brief selection is reset when switching organizations.
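The intended merge semantics can be shown as a standalone function (types are simplified stand-ins for `UserContext`, and the helper name is illustrative):

```typescript
interface Ctx {
	orgId?: string;
	orgName?: string;
	briefId?: string;
	briefName?: string;
	updatedAt?: string;
}

// Merge an update into the existing context; when the org changes and no
// brief is explicitly supplied, drop the stale brief selection.
function mergeContext(existing: Ctx, update: Partial<Ctx>): Ctx {
	const orgChanged =
		update.orgId !== undefined && update.orgId !== existing.orgId;
	return {
		...existing,
		...update,
		...(orgChanged && update.briefId === undefined
			? { briefId: undefined, briefName: undefined }
			: {}),
		updatedAt: new Date().toISOString()
	};
}
```

A pure function like this is also trivially unit-testable, unlike the credential-store round trip.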
private async getOrganizationService(): Promise<OrganizationService> {
	if (!this.organizationService) {
		// First check if we have credentials with a token
		const credentials = this.getCredentials();
		if (!credentials || !credentials.token) {
			throw new AuthenticationError('Not authenticated', 'NOT_AUTHENTICATED');
		}

		// Initialize session if needed (this will load from our storage adapter)
		await this.supabaseClient.initialize();

		// Use the SupabaseAuthClient which now has the session
		const supabaseClient = this.supabaseClient.getClient();
		this.organizationService = new OrganizationService(supabaseClient as any);
	}
	return this.organizationService;
}
🧹 Nitpick (assertive)
Avoid any: return a typed Supabase client
Prefer a typed getter on SupabaseAuthClient to drop casts.
// In SupabaseAuthClient:
getClientTyped<TDb = Database>(): SupabaseClient<TDb> {
return this.getClient() as unknown as SupabaseClient<TDb>;
}
// Here:
const supabaseClient = this.supabaseClient.getClientTyped<Database>();
this.organizationService = new OrganizationService(supabaseClient);
🤖 Prompt for AI Agents
In packages/tm-core/src/auth/auth-manager.ts around lines 222 to 238, the code
uses an any cast when obtaining the Supabase client for OrganizationService;
replace the cast by calling a typed getter on SupabaseAuthClient (add
getClientTyped<TDb>() to SupabaseAuthClient if it doesn’t exist) and use
supabaseAuthClient.getClientTyped<Database>() here, then pass that
strongly-typed client into new OrganizationService(...) instead of casting to
any.
// Map to our Organization interface
return data.map((org) => ({
	id: org.id ?? '',
	name: org.name ?? '',
	slug: org.slug ?? org.id ?? '' // Use ID as fallback if slug is null
}));
} catch (error) {
🛠️ Refactor suggestion
Don’t emit empty-string IDs; filter nulls before mapping
Returning Organization with id: '' harms data integrity. Drop null-id rows or throw.
- // Map to our Organization interface
- return data.map((org) => ({
- id: org.id ?? '',
- name: org.name ?? '',
- slug: org.slug ?? org.id ?? '' // Use ID as fallback if slug is null
- }));
+ // Filter invalid rows and map to our Organization interface
+ return data
+ .filter((org): org is { id: string; name: string | null; slug: string | null } => !!org.id)
+ .map((org) => ({
+ id: org.id,
+ name: org.name ?? '',
+ slug: org.slug ?? org.id // Use ID as fallback if slug is null
+ }));
🤖 Prompt for AI Agents
In packages/tm-core/src/services/organization.service.ts around lines 90 to 96,
the code maps organizations and substitutes an empty string for missing ids
which corrupts data; instead, ensure you validate and handle null/undefined ids
before mapping by checking the fetched data for any orgs with a missing id and
throwing a clear error (or alternatively filter them out) rather than returning
id: ''; then map only validated items and remove the fallback id: '' assignment
(use org.id as-is).
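The filter-then-map shape can be isolated as follows (the row type is a simplified stand-in for the generated Database row, and the function name is illustrative):

```typescript
interface OrgRow {
	id: string | null;
	name: string | null;
	slug: string | null;
}

interface Organization {
	id: string;
	name: string;
	slug: string;
}

// Drop rows without an id via a type-guard filter, then map with safe
// fallbacks — no empty-string ids ever reach callers.
function toOrganizations(rows: OrgRow[]): Organization[] {
	return rows
		.filter((row): row is OrgRow & { id: string } => !!row.id)
		.map((row) => ({
			id: row.id,
			name: row.name ?? '',
			slug: row.slug ?? row.id // fall back to id when slug is null
		}));
}
```

The type-guard predicate lets TypeScript narrow `id` to `string` inside the `.map`, so no `?? ''` fallback is needed on the id.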
return data.map((brief: any) => ({
	id: brief.id,
	accountId: brief.account_id,
	documentId: brief.document_id,
	status: brief.status,
	createdAt: brief.created_at,
	updatedAt: brief.updated_at
}));
} catch (error) {
🧹 Nitpick (assertive)
Avoid any: strongly type Supabase results
Use .returns<...>() to preserve types from Database and remove any casts.
// Example:
type BriefRow = Pick<
Database['public']['Tables']['brief']['Row'],
'id' | 'account_id' | 'document_id' | 'status' | 'created_at' | 'updated_at'
>;
const { data, error } = await this.supabaseClient
.from('brief')
.select('id,account_id,document_id,status,created_at,updated_at')
.eq('account_id', orgId)
	.returns<BriefRow[]>();
🤖 Prompt for AI Agents
In packages/tm-core/src/services/organization.service.ts around lines 193 to
201, the code maps Supabase result using untyped "any" — replace that with a
strongly typed Row type and use Supabase's .returns<...>() to preserve DB types
instead of casting; declare a BriefRow type (using Pick on
Database['public']['Tables']['brief']['Row'] for the needed fields), change the
.select call to request only those columns, append .returns<BriefRow[]>(), and
then use the typed data variable directly in the mapping so you no longer use
any casts.
try {
	await this.retryOperation(() =>
		this.repository.deleteTag(this.projectId, tag)
	);

	this.tagsCache.delete(tag);
} catch (error) {
	throw new TaskMasterError(
🛠️ Refactor suggestion
Guard deleteTag
- await this.retryOperation(() =>
- this.repository.deleteTag(this.projectId, tag)
- );
+ if (this.supportsTagOps) {
+ await this.retryOperation(() =>
+ (this.repository as any).deleteTag(this.projectId, tag)
+ );
+ }
🤖 Prompt for AI Agents
In packages/tm-core/src/storage/api-storage.ts around lines 509 to 516, add a
guard before attempting deletion: validate the incoming tag (non-empty string)
and check that the tag exists (use this.tagsCache.has(tag) or call
repository.existsTag/project-specific check) and only then call
repository.deleteTag; if the tag is invalid or not found, throw a clear
TaskMasterError indicating "tag not found" or "invalid tag" instead of
attempting delete—this prevents unnecessary repository calls and yields a proper
error for callers.
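The same guard pattern applies to delete, rename, and copy alike; in isolation it looks like this (the interface and names are illustrative, not the actual repository contract):

```typescript
interface TagRepository {
	// Optional: not every repository backend implements tag operations.
	deleteTag?: (projectId: string, tag: string) => Promise<void>;
}

// Call the remote delete only when the repository implements it; validate
// against the cache first and keep the cache consistent either way.
async function safeDeleteTag(
	repo: TagRepository,
	projectId: string,
	tag: string,
	cache: Map<string, unknown>
): Promise<void> {
	if (!cache.has(tag)) {
		throw new Error(`Tag ${tag} not found`);
	}
	if (typeof repo.deleteTag === 'function') {
		await repo.deleteTag(projectId, tag);
	}
	cache.delete(tag);
}
```

Feature-detecting the method avoids both the runtime crash on unimplemented backends and the need for `as any` casts at every call site.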
try {
	const tagData = this.tagsCache.get(oldTag);
	if (!tagData) {
		throw new Error(`Tag ${oldTag} not found`);
	}

	// Create new tag with same data
	const newTagData = { ...tagData, name: newTag };
	await this.repository.createTag(this.projectId, newTagData);

	// Delete old tag
	await this.repository.deleteTag(this.projectId, oldTag);

	// Update cache
	this.tagsCache.delete(oldTag);
	this.tagsCache.set(newTag, newTagData);
} catch (error) {
🛠️ Refactor suggestion
Guard renameTag; update cache when repo lacks support
- await this.repository.createTag(this.projectId, newTagData);
+ if (this.supportsTagOps) {
+ await (this.repository as any).createTag(this.projectId, newTagData);
+ }
@@
- await this.repository.deleteTag(this.projectId, oldTag);
+ if (this.supportsTagOps) {
+ await (this.repository as any).deleteTag(this.projectId, oldTag);
+ }
🤖 Prompt for AI Agents
In packages/tm-core/src/storage/api-storage.ts around lines 531 to 547, the
renameTag flow assumes the repository supports creating and deleting tags; add
guards so if repository.createTag or repository.deleteTag (or a repo-level
rename) is missing, we skip the remote operations and still update the in-memory
tagsCache to reflect the rename. Implement feature-detection: if
createTag/deleteTag are functions, call them and handle their errors (rethrow or
log) before mutating cache; otherwise, just mutate the cache (delete oldTag, set
newTag) and return success. Ensure the code does not attempt undefined repo
calls and that the cache update happens in the non-supported-repo branch.
try {
	const sourceData = this.tagsCache.get(sourceTag);
	if (!sourceData) {
		throw new Error(`Source tag ${sourceTag} not found`);
	}

	// Create new tag with copied data
	const targetData = { ...sourceData, name: targetTag };
	await this.repository.createTag(this.projectId, targetData);

	// Update cache
	this.tagsCache.set(targetTag, targetData);
} catch (error) {
🛠️ Refactor suggestion
Guard copyTag
- await this.repository.createTag(this.projectId, targetData);
+ if (this.supportsTagOps) {
+ await (this.repository as any).createTag(this.projectId, targetData);
+ }
🤖 Prompt for AI Agents
In packages/tm-core/src/storage/api-storage.ts around lines 563 to 575, the
copyTag block lacks guards and may overwrite an existing target or corrupt cache
on failure; first check that sourceTag exists (as already done) and also verify
that targetTag does NOT already exist in tagsCache (and optionally in the
repository) and throw a clear error if it does, then call repository.createTag
and only update tagsCache after a successful create; ensure errors from
repository.createTag propagate or are wrapped with context so the cache is not
updated on failure.
try {
	const tasks = await this.repository.getTasks(this.projectId);
	const tags = await this.repository.getTags(this.projectId);

	const tagStats = tags.map((tag) => ({
		tag: tag.name,
		taskCount: tag.tasks.length,
		lastModified: new Date().toISOString() // TODO: Get actual last modified from tag data
	}));

	return {
		totalTasks: tasks.length,
		totalTags: tags.length,
		storageSize: 0, // Not applicable for API storage
		lastModified: new Date().toISOString(),
		tagStats
	};
🛠️ Refactor suggestion
getStats: avoid calling unimplemented getTags; derive from cache when needed
- const tasks = await this.repository.getTasks(this.projectId);
- const tags = await this.repository.getTags(this.projectId);
+ const tasks = await this.repository.getTasks(this.projectId);
+ const tags = this.supportsTagOps
+ ? await (this.repository as any).getTags(this.projectId)
+ : Array.from(this.tagsCache.values());
🤖 Prompt for AI Agents
packages/tm-core/src/storage/api-storage.ts around lines 591 to 607: the code
currently calls the unimplemented this.repository.getTags; replace that call by
deriving tags from the in-memory cache on the API storage instance instead (e.g.
use this.tags, this.tagCache, or this.cache.getTags() as available) and fall
back to an empty array if no cache exists; compute tagStats and totalTags from
that cached tags array, and set lastModified by using each tag's
updated/modified timestamp when present (fall back to new Date().toISOString()),
and avoid calling repository.getTags so the method works without the
unimplemented API.
logger.info('☁️ Using API storage');
return StorageFactory.createApiStorage(config);
🛠️ Refactor suggestion
Pass projectPath to createApiStorage
Update callsite after signature change.
- return StorageFactory.createApiStorage(config);
+ return StorageFactory.createApiStorage(config, projectPath);
🤖 Prompt for AI Agents
In packages/tm-core/src/storage/storage-factory.ts around lines 91 to 93, the
call to StorageFactory.createApiStorage was not updated after the signature
change — pass the current projectPath into the call. Replace the existing return
StorageFactory.createApiStorage(config) with a call that includes projectPath as
the second argument (e.g., StorageFactory.createApiStorage(config,
projectPath)), ensuring projectPath is in scope or pulled from the surrounding
context before the call.
What type of PR is this?
Description
Related Issues
How to Test This
# Example commands or steps
Expected result:
Contributor Checklist
- npm run changeset
- npm test
- npm run format-check (or npm run format to fix)
For Maintainers
Summary by CodeRabbit
New Features
Documentation
Tests
Changes
Chores