MCP-Use Client

A unified MCP (Model Context Protocol) client library that enables any LLM to connect to MCP servers and build custom agents with tool access. This library provides a high-level Python interface for connecting LangChain-compatible LLMs to MCP tools like web browsing, file operations, and more.

Features

  • πŸ”§ Multi-transport Support: Connect via stdio, HTTP, WebSocket, or sandboxed execution
  • πŸ€– LangChain Integration: Works with any LangChain-compatible LLM
  • πŸ“Š Advanced Token Counting: Precise token tracking and management
  • πŸ›‘οΈ Security-First: Built-in security best practices and sandboxing
  • ⚑ High Performance: Async/await architecture for optimal performance
  • 🎯 Agent Framework: High-level agent interface with conversation memory
  • πŸ“ˆ Observability: Built-in telemetry and monitoring support

Quick Start

Installation

# From a clone of this repository
pip install -e ".[dev,anthropic,openai,e2b,search]"

Basic Usage

import asyncio
from mcp_use import MCPClient

async def main():
    # Initialize client with configuration
    client = MCPClient()
    
    # Connect to MCP servers
    await client.connect_to_server("playwright", {
        "command": "npx",
        "args": ["@playwright/mcp@latest"]
    })
    
    # Use tools
    tools = await client.get_available_tools()
    result = await client.call_tool("browse_web", {"url": "https://example.com"})
    
    print(result)

if __name__ == "__main__":
    asyncio.run(main())

Agent Usage

import asyncio

from mcp_use.agents import MCPAgent
from langchain_openai import ChatOpenAI

async def main():
    # Create LLM
    llm = ChatOpenAI(model="gpt-4")

    # Create agent with MCP tools
    agent = MCPAgent(
        llm=llm,
        config_path="mcp_config.json"
    )

    # Use the agent
    response = await agent.run("Browse to example.com and summarize the content")
    print(response)

asyncio.run(main())

Configuration

Create an mcp_config.json file:

{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"],
      "env": { "DISPLAY": ":1" }
    },
    "filesystem": {
      "command": "python",
      "args": ["-m", "mcp_server_filesystem", "/path/to/files"]
    }
  }
}

Token Counting System

The library includes an advanced token counting system:

from mcp_use.token_counting import TokenCountingFactory

# Create token counter
counter = TokenCountingFactory.create_counter(
    provider="openai",
    model="gpt-4",
    openai_api_key="your-key"
)

# Count tokens for a list of chat messages
usage = await counter.count_tokens(messages)
print(f"Input: {usage.input_tokens}, Output: {usage.output_tokens}")

Architecture

Core Components

  • MCPClient: Main entry point for MCP server management
  • MCPAgent: High-level agent interface using LangChain
  • MCPSession: Individual MCP server connection management
  • Connectors: Transport layer abstractions (stdio, HTTP, WebSocket, sandbox)
  • ServerManager: Dynamic server selection capabilities

Supported Transports

  • Stdio: Process-based MCP servers
  • HTTP: HTTP-based MCP servers with SSE
  • WebSocket: WebSocket-based MCP servers
  • Sandbox: E2B sandboxed execution for security
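
Which transport is used follows from the shape of each server entry in the configuration. The sketch below uses only the config forms shown elsewhere in this README (command/args for stdio processes, url for HTTP endpoints); WebSocket and sandbox entries are not shown in this README and depend on your deployment:

config = {
    "mcpServers": {
        "browser": {
            # stdio transport: the client spawns this process
            "command": "npx",
            "args": ["@playwright/mcp@latest"]
        },
        "database": {
            # HTTP (SSE) transport: the client connects to this endpoint
            "url": "http://localhost:8080/mcp"
        }
    }
}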

Development

Setup

# Create virtual environment
python -m venv env
source env/bin/activate  # On Windows: env\Scripts\activate

# Install for development
pip install -e ".[dev,search]"

Testing

# Run all tests
pytest

# Run with coverage
pytest --cov=mcp_use --cov-report=html

# Run specific test types
pytest tests/unit/          # Unit tests
pytest tests/integration/   # Integration tests

Code Quality

# Format and lint
ruff check --fix
ruff format

# Type checking
mypy mcp_use/

Examples

Web Browsing Agent

import asyncio

from mcp_use.agents import MCPAgent
from langchain_anthropic import ChatAnthropic

agent = MCPAgent(
    llm=ChatAnthropic(model="claude-3-sonnet-20240229"),
    config={
        "mcpServers": {
            "playwright": {
                "command": "npx",
                "args": ["@playwright/mcp@latest"]
            }
        }
    }
)

async def main():
    result = await agent.run("Find the latest news on AI developments")
    print(result)

asyncio.run(main())

File Operations

import asyncio

from mcp_use.agents import MCPAgent

config = {
    "mcpServers": {
        "filesystem": {
            "command": "python",
            "args": ["-m", "mcp_server_filesystem", "./documents"]
        }
    }
}

async def main():
    # your_llm: any LangChain-compatible LLM
    agent = MCPAgent(llm=your_llm, config=config)
    result = await agent.run("Analyze all Python files in the project")
    print(result)

asyncio.run(main())

Multi-Server Setup

config = {
    "mcpServers": {
        "web": {
            "command": "npx",
            "args": ["@playwright/mcp@latest"]
        },
        "files": {
            "command": "python",
            "args": ["-m", "mcp_server_filesystem", "./data"]
        },
        "database": {
            "url": "http://localhost:8080/mcp"
        }
    }
}
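
With a configuration like this, a single agent can draw on tools from all three servers at once. A minimal usage sketch, following the same MCPAgent pattern as the examples above (your_llm again stands in for any LangChain-compatible LLM, and the prompt is illustrative):

import asyncio
from mcp_use.agents import MCPAgent

async def main():
    # One agent, three servers: "web", "files", and "database"
    agent = MCPAgent(llm=your_llm, config=config)
    result = await agent.run(
        "Fetch the project homepage, compare it with the notes in ./data, "
        "and summarize the differences"
    )
    print(result)

asyncio.run(main())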

Security

  • Environment variable-based API key management
  • Sandboxed execution support via E2B
  • Tool access restrictions via disallowed_tools (see the sketch after this list)
  • Proper resource cleanup and connection management
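
A minimal sketch of restricting tool access, assuming disallowed_tools is accepted as an MCPAgent keyword argument (the exact parameter placement and the tool names below are assumptions, not taken from this README):

from mcp_use.agents import MCPAgent

# Sketch: keep destructive tools out of the LLM's reach.
agent = MCPAgent(
    llm=your_llm,
    config_path="mcp_config.json",
    disallowed_tools=["file_delete", "shell_exec"],  # hypothetical tool names
)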

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests
  5. Run the test suite
  6. Submit a pull request

License

This project is licensed under the MIT License - see the LICENSE file for details.

Support

For issues and questions:

  • Create an issue on GitHub
  • Check the documentation
  • Review the examples directory
