Skip to content

Devora-AS/devora-prompt-assistant-mcp

Folders and files

NameName
Last commit message
Last commit date

Latest commit

Β 

History

37 Commits
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 

Repository files navigation

Devora Prompt Assistant (MCP Server)

CI Status NPM Version NPM Downloads License Node Version Security Status Production Ready

Anthropic OpenAI Azure OpenAI Google Gemini Perplexity

πŸš€ Production-Ready MCP Server - Transform raw coding prompts into structured, enhanced prompts using multiple AI providers with enterprise-grade security, monitoring, and reliability.

Table of Contents

Overview

The Devora Prompt Assistant is a production-ready Model Context Protocol (MCP) server that transforms your raw coding prompts into structured, enhanced prompts optimized for AI assistants. Built with enterprise-grade security, comprehensive monitoring, and high reliability following all 14 MCP Server Best Practices.

What Makes This Special?

  • 🎯 Production-Grade: Implements all 14 MCP Server Best Practices for enterprise use
  • πŸ”’ Security First: Defense-in-depth security with rate limiting, circuit breakers, and input sanitization
  • πŸ“Š Full Observability: Comprehensive metrics, tracing, and structured logging
  • ⚑ High Performance: >100 req/s (stdio), >500 req/s (HTTP) with intelligent caching
  • πŸ›‘οΈ Resilient: Circuit breaker protection, graceful degradation, and 99.9% uptime
  • πŸ”§ Multi-Provider: Support for 5 AI providers with automatic failover

Key Features

🧠 Intelligent Prompt Enhancement

  • Use Case Auto-Detection: Automatically detects debugging, refactoring, feature creation, architecture decisions, tech comparison, and content design
  • Framework Detection: Detects your tech stack (React, Vue, Angular, Node.js, Python, PHP, etc.) and adjusts suggestions
  • Smart Question Generation: Generates clarifying questions when prompts are vague or incomplete
  • Structured Templates: Enforces consistent markdown sections with use-case-specific scaffolds

πŸ” Intelligent Context Management

  • Git Integration: Automatically detects git repos and uses git diff for changed files
  • Smart Filtering: Honors .gitignore patterns and excludes common build directories
  • Multiple Strategies: changed, paths, and related collection strategies
  • Intelligent Caching: LRU cache with 10-minute TTL for fast repeat requests

πŸ”’ Enterprise Security

  • Defense in Depth: 6-layer security model with network isolation, authentication, authorization, validation, sanitization, and rate limiting
  • Circuit Breaker: Prevents cascade failures with automatic recovery
  • Input/Output Sanitization: Protects against injection attacks and data leaks
  • Secret Redaction: API keys and tokens automatically redacted from logs

πŸ“Š Production Monitoring

  • Comprehensive Metrics: Track throughput, latency, error rate, cache hit rate, and memory usage
  • Distributed Tracing: Full request lifecycle tracking with trace ID propagation
  • Structured Logging: JSON logs with rotation, separate error/audit files
  • Health Checks: /health, /ready, and /metrics endpoints

⚑ High Performance

  • Connection Pooling: Reuse HTTP connections for LLM providers
  • Intelligent Caching: 15-minute TTL with size-based eviction
  • Memory Guards: Automatic cache clearing at 90% memory usage
  • Batch Operations: Optimized file reads and context collection

Quick Start

πŸš€ One-Click Cursor Installation

Install MCP Server

Click the button above to automatically add this MCP server to Cursor.

⚠️ Important: At least one AI provider API key is required. The server will auto-detect which providers are available.

πŸ§ͺ Testing: Set TEST_MODE=true to run without API keys for testing purposes.

πŸ“‹ Manual Installation

Add this to your Cursor MCP settings (~/.cursor/mcp.json):

{
  "mcpServers": {
    "devora-prompt-assistant": {
      "command": "npx",
      "args": ["-y", "@devora_no/prompt-assistant-mcp"],
      "env": {
        "TRANSPORT": "stdio",
        "OPENAI_API_KEY": "your-openai-key-here",
        "ANTHROPIC_API_KEY": "your-anthropic-key-here"
      }
    }
  }
}

Set your API keys (at least one required):

export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
# ... or any other provider

Restart Cursor and you're ready to go!

Installation

Prerequisites

  • Node.js: 20+ (recommended: latest LTS)
  • Package Manager: pnpm (recommended), npm, or yarn
  • AI Provider: At least one API key from supported providers

Installation Methods

Option 1: NPM (Recommended)

# One-time use
npx @devora_no/prompt-assistant-mcp

# Global installation
npm install -g @devora_no/prompt-assistant-mcp
devora-prompt-assistant

# Alias
npx dpa

Option 2: Development Setup

# Clone repository
git clone https://github.com/Devora-AS/devora-prompt-assistant-mcp.git
cd devora-prompt-assistant-mcp

# Install dependencies
pnpm install

# Copy environment template
cp .env.example .env

# Edit with your API keys
nano .env

Option 3: Docker

# Run with Docker
docker run -p 8000:8000 \
  -e OPENAI_API_KEY=your_key_here \
  -e AUTH_BEARER_TOKENS=your_token_here \
  ghcr.io/devora-as/devora-prompt-assistant-mcp

Inspector (stdio) Quick Start

πŸ” Testing with MCP Inspector

For development and debugging, use MCP Inspector with stdio transport:

  1. Build the project:

    pnpm install && pnpm build
  2. Choose your configuration:

    • Published package: Load examples/inspector-stdio.json
    • Local development: Load examples/inspector-stdio-local.json
  3. Test the tools:

    • Verify collect_context and enhance_prompt are listed
    • Run test scenarios from docs/inspector-playbook.md

πŸ› Debug Mode

Enable detailed logging by setting CONTEXT_DEBUG=1 in your environment:

{
  "env": {
    "CONTEXT_DEBUG": "1",
    "LOG_LEVEL": "debug"
  }
}

This provides comprehensive trace information for debugging file collection, git integration, and performance.

Usage

🎯 Core Workflow

  1. Collect Context (optional but recommended):

    {
      "strategy": "changed",
      "maxKB": 32,
      "maxFiles": 20,
      "extensions": ["ts", "tsx", "js", "jsx"]
    }
  2. Enhance Prompt:

    {
      "task": "Refactor this React component to use TypeScript",
      "context": "[context from collect_context]",
      "audience": "cursor",
      "style": "detailed"
    }

πŸ› οΈ Available Tools

enhance_prompt - Prompt Enhancement

Transforms raw coding prompts into structured, enhanced prompts with use-case detection and smart question generation.

Parameters:

  • task (string, required): The coding task to enhance
  • context (string, optional): Additional context from workspace
  • audience (string, optional): Target audience (cursor, claude, copilot, general)
  • style (string, optional): Response style (concise, detailed)
  • constraints (array, optional): Specific constraints
  • provider (string, optional): AI provider to use
  • temperature (number, optional): Generation temperature (0-2)
  • maxTokens (number, optional): Maximum tokens to generate

collect_context - Workspace Context Collection

Intelligently collects relevant files and context from the workspace using git awareness and smart filtering.

Parameters:

  • strategy (string, optional): Collection strategy (changed, paths, related)
  • maxKB (number, optional): Maximum total size in KB
  • maxFiles (number, optional): Maximum number of files
  • include (array, optional): Glob patterns to include
  • exclude (array, optional): Glob patterns to exclude
  • useGit (boolean, optional): Enable git integration
  • extensions (array, optional): File extensions to include

πŸ”§ LLM Enhancement Modes

  • review (default): Minor improvements, structure validation
  • refine: Comprehensive content enhancement
  • off: Deterministic scaffold only, no LLM calls

🌐 Provider Support

Provider Default Model Temperature Max Tokens Notes
Anthropic claude-3-5-sonnet-latest βœ“ maxTokens -
OpenAI o3-mini βœ“ maxTokens Chat Completions
Azure OpenAI gpt-4o-mini βœ“ maxTokens Deployment required
Gemini gemini-2.0-flash βœ“ maxOutputTokens Different param name
Perplexity sonar βœ“ maxTokens -

Documentation

πŸ“š Complete Documentation

🎯 Use Case Examples

Security & Privacy

πŸ”’ Security Features

  • Defense in Depth: 6-layer security model
  • Rate Limiting: Token bucket algorithm per-client
  • Circuit Breaker: Prevents cascade failures
  • Input/Output Sanitization: Protection against injection attacks
  • Secret Redaction: API keys automatically redacted from logs
  • Bearer Token Authentication: Secure HTTP transport

πŸ›‘οΈ Privacy Protection

  • Local First: All processing happens locally in stdio mode
  • No Data Storage: No code or prompts stored or transmitted
  • Secret Redaction: Sensitive data automatically redacted
  • Context Collection: Optional workspace scanning with user control

Performance

πŸ“Š Performance Metrics

  • Throughput: >100 req/s (stdio), >500 req/s (HTTP)
  • Latency P95: <100ms (deterministic), <2s (with LLM)
  • Error Rate: <0.1% under normal load
  • Memory: <512MB per instance with auto-clearing
  • Cache Hit Rate: >70% for repeated queries
  • Uptime: 99.9% with circuit breaker protection

⚑ Optimization Features

  • Connection Pooling: Reuse HTTP connections
  • Intelligent Caching: 15-minute TTL with LRU eviction
  • Memory Guards: Automatic cache clearing at 90% usage
  • Batch Operations: Optimized file reads and context collection

Development

πŸ—οΈ Project Structure

src/
β”œβ”€β”€ core/            # Core utilities (security, monitoring, caching)
β”‚   β”œβ”€β”€ security/    # Rate limiting, sanitization, circuit breaker
β”‚   β”œβ”€β”€ metrics.ts   # Performance monitoring
β”‚   β”œβ”€β”€ tracing.ts   # Distributed tracing
β”‚   └── fileLogger.ts # Structured logging
β”œβ”€β”€ config/          # Environment and configuration management
β”œβ”€β”€ providers/       # AI provider adapters
β”œβ”€β”€ server/          # MCP server and transports
β”œβ”€β”€ auth/            # Authentication middleware
└── index.ts         # CLI entry point

πŸ› οΈ Available Scripts

# Development
pnpm dev:stdio       # Run with stdio transport
pnpm dev:http        # Run with HTTP transport

# Building
pnpm build           # Build TypeScript to dist/
pnpm prepare         # Build and set executable bit

# Testing
pnpm test            # Run unit tests
pnpm test:watch      # Run tests in watch mode
pnpm test:coverage   # Run with coverage

# Code Quality
pnpm lint            # Run ESLint
pnpm lint:fix        # Fix ESLint issues
pnpm format          # Format with Prettier

πŸ§ͺ Testing

  • Unit Tests: >80% coverage
  • Integration Tests: All tool workflows
  • Chaos Tests: Resilience under failure conditions
  • Performance Tests: KPI benchmarking

πŸ”§ Troubleshooting

Quick Fixes

Server won't start?

  • Set at least one API key or use TEST_MODE=true
  • Check your configuration in ~/.cursor/mcp.json

Git errors in collect_context?

  • Use strategy: "paths" instead of "changed"
  • Or initialize a git repository

Connection issues?

  • Verify the server is running
  • Check for port conflicts
  • Ensure proper MCP configuration

Test Mode

Run without API keys for testing:

# Test mode (no API keys needed)
TEST_MODE=true npx @devora_no/prompt-assistant-mcp

# Or in MCP config
{
  "env": {
    "TEST_MODE": "true"
  }
}

Health Check

Check server status and configuration:

npx @modelcontextprotocol/inspector --cli npx -y @devora_no/prompt-assistant-mcp --method tools/call --tool-name health_check

Debug Mode

Enable detailed logging:

CONTEXT_DEBUG=1 LOG_LEVEL=debug npx @devora_no/prompt-assistant-mcp

Common Issues

Problem Solution
"No providers configured" Set API key or TEST_MODE=true
"No git history detected" Use strategy: "paths"
"Connection closed" Restart server, check logs
Slow responses Check API limits, enable caching

πŸ“– Full troubleshooting guide: docs/troubleshooting.md

FAQ

General Questions

Q: What is MCP? A: The Model Context Protocol (MCP) is a standard for connecting AI assistants to data sources and tools. This server implements the MCP specification.

Q: Which AI providers are supported? A: Anthropic Claude, OpenAI, Azure OpenAI, Google Gemini, and Perplexity. At least one API key is required.

Q: Is this production-ready? A: Yes! This implements all 14 MCP Server Best Practices with enterprise-grade security, monitoring, and reliability.

Installation and Setup

Q: How do I install this in Cursor? A: Use the one-click installation button above, or manually add the configuration to your ~/.cursor/mcp.json file.

Q: Do I need all provider API keys? A: No, you only need at least one. The server will auto-detect which providers are available.

Q: What's the difference between stdio and HTTP transport? A: Stdio is for local development (recommended), HTTP is for remote access (experimental in v0.2.1).

API and Integration

Q: How do I use the tools? A: The tools are automatically available in Cursor. Use collect_context to gather workspace files, then enhance_prompt to improve your prompts.

Q: Can I use this with other MCP clients? A: Yes, this implements the standard MCP protocol and works with any MCP-compatible client.

Q: What are the input limits? A: Total input: 64KB, Task: 32KB, Context: 16KB. These limits ensure optimal performance.

Security

Q: Is my code safe? A: Yes, in stdio mode all processing happens locally. No code or prompts are stored or transmitted to third parties.

Q: Are API keys secure? A: Yes, API keys are never logged and are automatically redacted from error messages.

Q: What about rate limiting? A: The server implements token bucket rate limiting per-client to prevent abuse.

Troubleshooting

Q: "No tools, prompts, or resources" error? A: Check your MCP configuration, ensure API keys are set, and restart Cursor.

Q: "No providers configured" error? A: Set at least one provider API key in your environment variables.

Q: How do I enable debug logging? A: Set LOG_LEVEL=debug in your environment variables.

Contributing

We welcome contributions! Please see our Contributing Guide for details.

How to Contribute

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests
  5. Run linting and tests
  6. Submit a pull request

Reporting Issues

License

MIT License - see LICENSE file for details.


Last Updated: January 15, 2025
Version: 0.2.1
Status: Production Ready
Security Status: βœ… Secured & Monitored
Maintained by: Devora

πŸ“„ License

MIT License - see LICENSE file for details.


Developed by Devora β˜”οΈ

Brave β€’ Innovative β€’ Responsible β€’ Creative β€’ Different