
Add Claude Code CLI Integration for Cost-Effective AI Operations#601

Closed
neno-is-ooo wants to merge 5 commits into eyaltoledano:next from neno-is-ooo:feature/new-development

Conversation

@neno-is-ooo
Contributor

@neno-is-ooo neno-is-ooo commented May 26, 2025

Add Claude Code CLI Integration for Cost-Effective AI Operations

Summary

This PR introduces support for Claude Code CLI as an AI provider in Taskmaster, enabling users with Claude Code subscriptions (~$17/month) to use all AI-powered features without per-token API costs.

Motivation

Many developers have Claude Code subscriptions for their IDE integration. This feature allows them to leverage their existing subscription for Taskmaster's AI features instead of paying per-API-call costs, making the tool more accessible for individual developers and small teams.

Changes

Core Implementation

  • Added new AI provider module src/ai-providers/claude-code.js that interfaces with Claude Code CLI
  • Integrated the provider into ai-services-unified.js service registry
  • Updated configuration management to handle Claude Code's authentication model (no API key required)

Configuration & Setup

  • Added Claude Code to supported-models.json with zero-cost pricing
  • Enhanced interactive setup (--setup) to include Claude Code option
  • Updated config validation to recognize claude-code as a valid provider
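
A zero-cost entry in supported-models.json might look something like the following. This is a hypothetical sketch: the field names, nesting, and the exact provider key are assumptions, since the repo's actual schema is not shown in this PR description.

```json
{
  "claude-code": [
    {
      "id": "claude-code",
      "name": "Claude Code CLI (Subscription)",
      "cost_per_1m_tokens": { "input": 0, "output": 0 },
      "requiresApiKey": false
    }
  ]
}
```

The key point is the zero pricing and the flag indicating that no API key is needed, matching Claude Code's subscription-based authentication model.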

Testing

  • Added comprehensive unit tests for the Claude Code provider
  • Created integration tests for configuration and service integration
  • Manually tested all AI-dependent commands (parse-prd, update-task, expand, etc.)

Documentation

  • Created user documentation in docs/claude-code-integration.md
  • Added detailed contributor documentation in docs/contributor-docs/claude-code-integration.md
  • Updated README to list Claude Code as a supported provider

Technical Details

The implementation uses Node.js spawn instead of exec for subprocess communication, which:

  • Avoids shell escaping issues with complex prompts
  • Provides better control over stdin/stdout streams
  • Handles large responses more reliably

Testing Instructions

  1. Install Claude Code CLI (requires active subscription)
  2. Configure Taskmaster to use Claude Code:
    task-master --setup
    # Select "Claude Code CLI (Subscription)"
  3. Test any AI command:
    task-master parse-prd --file prd.txt
    task-master expand --id=1
    task-master update-task --id=1 --prompt="Add error handling"

Breaking Changes

None. This is an additive feature that doesn't affect existing functionality.

Future Enhancements

  • Add support for model selection (opus/sonnet/haiku) via --model flag
  • Implement true streaming support
  • Leverage Claude Code's MCP capabilities for enhanced features

Checklist

  • Code follows project style guidelines
  • Tests pass locally
  • Documentation updated
  • No breaking changes
  • Manually tested all affected commands

- Switch from exec to spawn for better stdin handling and avoiding shell escaping issues
- Add comprehensive unit tests for the Claude Code provider
- Add integration tests for configuration and service integration
- Create detailed contributor documentation explaining the implementation
- Fix config validation to properly recognize claude-code as a valid provider
- Improve error handling and timeout management

The spawn approach resolves issues with complex prompts containing special characters
and provides more reliable command execution. Tests ensure the integration works
correctly across all edge cases.
@changeset-bot

changeset-bot bot commented May 26, 2025

⚠️ No Changeset found

Latest commit: 156ed2f

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types


- Apply Prettier formatting to all files
- Rewrite claude-code tests to properly mock child_process
- Fix test async handling and error messages
- Update documentation formatting
- Skip integration test when Claude Code CLI is not installed
- Handle both missing CLI and auth errors gracefully
- Integration test now properly skips in CI environments
@neno-is-ooo neno-is-ooo changed the base branch from main to next May 26, 2025 11:35
@eyaltoledano
Owner

awesome, looking forward to trying this out

@janss-en

this is exactly what I'm looking for! Can't wait to try it out.

@christabone

Thank you so much for this PR, really excited to try it out.

@mpisat

mpisat commented Jun 1, 2025

How can we test this?

@neno-is-ooo
Contributor Author

neno-is-ooo commented Jun 1, 2025

How can we test this?

Give the link to your favourite agent, ask it to clone the repo locally at commit 156ed2f, and then have it create a wrapper for commands inside taskmaster/scripts/

something that looks like this:

#!/bin/bash
# Enhanced taskmaster wrapper for Paper1 project with show-llm command

# Get the directory where this script is located
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"

# Check if the first argument is "show-llm"
if [ "$1" = "show-llm" ]; then
    # Custom implementation for show-llm command
    TASK_ID="$2"
    
    if [ -z "$TASK_ID" ]; then
        echo "Error: Please provide a task ID"
        echo "Usage: $0 show-llm <task-id>"
        exit 1
    fi
    
    # Use Python to generate LLM-consumable output
    python3 - << 'EOF' "$TASK_ID" "$SCRIPT_DIR"
import json
import sys
import os

task_id = sys.argv[1]
script_dir = sys.argv[2]

# Load tasks.json
tasks_path = os.path.join(script_dir, "tasks", "tasks.json")

try:
    with open(tasks_path, 'r') as f:
        data = json.load(f)
except FileNotFoundError:
    print(f"Error: tasks.json not found at {tasks_path}")
    sys.exit(1)

def find_task_or_subtask(tasks, target_id):
    """Find a task or subtask by ID (supports both "5" and "5.1" format)"""
    # Check if it's a subtask ID (contains a dot)
    if '.' in str(target_id):
        parent_id, subtask_id = str(target_id).split('.', 1)
        parent_id = int(parent_id)
        subtask_id = int(subtask_id)
        
        # Find parent task
        for task in tasks:
            if task['id'] == parent_id:
                if 'subtasks' in task:
                    for subtask in task['subtasks']:
                        if subtask['id'] == subtask_id:
                            # Add parent task info to subtask
                            subtask['parent_task'] = {
                                'id': task['id'],
                                'title': task['title']
                            }
                            subtask['is_subtask'] = True
                            return subtask
        return None
    else:
        # Regular task
        target_id = int(target_id)
        for task in tasks:
            if task['id'] == target_id:
                task['is_subtask'] = False
                return task
        return None

def format_dependencies(deps, tasks_list):
    """Format dependencies with their titles"""
    if not deps:
        return "None"
    
    formatted = []
    for dep_id in deps:
        dep_task = find_task_or_subtask(tasks_list, dep_id)
        if dep_task:
            formatted.append(f"- {dep_id}: {dep_task.get('title', 'Unknown')}")
        else:
            formatted.append(f"- {dep_id}: [Not Found]")
    
    return '\n'.join(formatted)

def generate_llm_output(task, all_tasks):
    """Generate comprehensive LLM-consumable task description"""
    output = []
    
    # Header
    if task.get('is_subtask'):
        output.append(f"# SUBTASK IMPLEMENTATION REQUEST: {task['parent_task']['id']}.{task['id']}")
        output.append(f"Parent Task: #{task['parent_task']['id']} - {task['parent_task']['title']}")
    else:
        output.append(f"# TASK IMPLEMENTATION REQUEST: {task['id']}")
    
    output.append("")
    
    # Basic Information
    output.append("## Task Information")
    output.append(f"**ID**: {task['parent_task']['id']}.{task['id']}" if task.get('is_subtask') else f"**ID**: {task['id']}")
    output.append(f"**Title**: {task['title']}")
    output.append(f"**Status**: {task.get('status', 'pending')}")
    output.append(f"**Priority**: {task.get('priority', 'medium')}")
    output.append("")
    
    # Description
    output.append("## Description")
    output.append(task.get('description', 'No description provided.'))
    output.append("")
    
    # Dependencies
    output.append("## Dependencies")
    deps = task.get('dependencies', [])
    if deps:
        output.append("This task depends on the following tasks being completed first:")
        output.append(format_dependencies(deps, all_tasks))
    else:
        output.append("No dependencies - this task can be started immediately.")
    output.append("")
    
    # Implementation Details
    if task.get('details'):
        output.append("## Implementation Details")
        output.append(task['details'])
        output.append("")
    
    # Test Strategy
    if task.get('testStrategy'):
        output.append("## Test Strategy")
        output.append(task['testStrategy'])
        output.append("")
    
    # Subtasks (if any)
    if not task.get('is_subtask') and task.get('subtasks'):
        output.append("## Subtasks")
        output.append("This task has the following subtasks:")
        for st in task['subtasks']:
            status_marker = "✓" if st.get('status') == 'done' else "○"
            output.append(f"{status_marker} {task['id']}.{st['id']}: {st['title']} [{st.get('status', 'pending')}]")
        output.append("")
    
    # Context for parent task
    if task.get('is_subtask'):
        # Find parent task to show other subtasks
        parent_id = task['parent_task']['id']
        parent_task = None
        for t in all_tasks:
            if t['id'] == parent_id:
                parent_task = t
                break
        
        if parent_task and parent_task.get('subtasks'):
            output.append("## Context: Other Subtasks in Parent Task")
            for st in parent_task['subtasks']:
                if st['id'] == task['id']:
                    output.append(f"→ {parent_id}.{st['id']}: {st['title']} [CURRENT]")
                else:
                    status_marker = "✓" if st.get('status') == 'done' else "○"
                    output.append(f"{status_marker} {parent_id}.{st['id']}: {st['title']} [{st.get('status', 'pending')}]")
            output.append("")
    
    # Implementation Instructions
    output.append("## Implementation Instructions")
    output.append("Please implement this task following these guidelines:")
    output.append("1. Focus ONLY on the requirements specified above")
    output.append("2. Follow existing code patterns and conventions in the project")
    output.append("3. Include appropriate error handling and logging")
    output.append("4. Write clean, well-documented code")
    output.append("5. Ensure all tests pass as specified in the test strategy")
    
    if task.get('is_subtask'):
        output.append("6. This is a subtask - ensure it integrates properly with the parent task")
    
    output.append("")
    output.append("## Additional Context")
    output.append(f"Project Root: {script_dir}")
    output.append("Technology Stack: React Native (Expo) frontend, Supabase backend, TypeScript")
    output.append("")
    
    return '\n'.join(output)

# Find and process the task
task = find_task_or_subtask(data.get('tasks', []), task_id)

if not task:
    print(f"Error: Task with ID {task_id} not found!")
    sys.exit(1)

# Generate and print LLM-consumable output
llm_output = generate_llm_output(task, data.get('tasks', []))
print(llm_output)
EOF
    
else
    # For all other commands, use the original task-master
    exec "$SCRIPT_DIR/taskmaster/bin/task-master.js" "$@"
fi

Ask your agent to explain to you in detail what you are doing first.

@Crunchyman-ralph
Collaborator

@neno-is-ooo let's focus on this PR and aim for a release in 0.18.0? Let's fix conflicts and I'll review.

Mind trying to treat Claude Code like a provider and using something like src/ai-providers/claude-code.js?

@pfurini

pfurini commented Jun 4, 2025

@neno-is-ooo let's focus on this PR and aim for a release in 0.18.0? Let's fix conflicts and I'll review.

Mind trying to treat Claude Code like a provider and using something like src/ai-providers/claude-code.js?

I think that this PR does exactly that -> #649
Can we just review and accept that PR instead of this one?
I'm a Max user too, and wanted to try TaskMaster without incurring API costs.

@eyaltoledano
Owner

@neno-is-ooo let's solve the conflicts, I'd like to get this one out by 0.18

@corticalstack

@neno-is-ooo let's solve the conflicts, I'd like to get this one out by 0.18

Looking forward to this, as I want to switch over to Claude Code Max ASAP.

@Crunchyman-ralph
Collaborator

I think we are going to close this in favor of #649
