A unified MCP (Model Context Protocol) client library that enables any LLM to connect to MCP servers and build custom agents with tool access. This library provides a high-level Python interface for connecting LangChain-compatible LLMs to MCP tools like web browsing, file operations, and more.
- 🔧 Multi-transport Support: Connect via stdio, HTTP, WebSocket, or sandboxed execution
- 🤖 LangChain Integration: Works with any LangChain-compatible LLM
- 📊 Advanced Token Counting: Precise token tracking and management
- 🛡️ Security-First: Built-in security best practices and sandboxing
- ⚡ High Performance: Async/await architecture for optimal performance
- 🎯 Agent Framework: High-level agent interface with conversation memory
- 📈 Observability: Built-in telemetry and monitoring support
Install from source with the optional extras you need:

```shell
pip install -e ".[dev,anthropic,openai,e2b,search]"
```
```python
import asyncio

from mcp_use import MCPClient

async def main():
    # Initialize client with configuration
    client = MCPClient()

    # Connect to MCP servers
    await client.connect_to_server("playwright", {
        "command": "npx",
        "args": ["@playwright/mcp@latest"],
    })

    # Use tools
    tools = await client.get_available_tools()
    result = await client.call_tool("browse_web", {"url": "https://example.com"})
    print(result)

if __name__ == "__main__":
    asyncio.run(main())
```
```python
from langchain_openai import ChatOpenAI

from mcp_use.agents import MCPAgent

# Create LLM
llm = ChatOpenAI(model="gpt-4")

# Create agent with MCP tools
agent = MCPAgent(
    llm=llm,
    config_path="mcp_config.json",
)

# Use the agent (inside an async context)
response = await agent.run("Browse to example.com and summarize the content")
print(response)
```
Create an `mcp_config.json` file:
```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"],
      "env": { "DISPLAY": ":1" }
    },
    "filesystem": {
      "command": "python",
      "args": ["-m", "mcp_server_filesystem", "/path/to/files"]
    }
  }
}
```
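Because the configuration is plain JSON, it can be sanity-checked before handing it to the client. A minimal sketch of such a check; the `validate_mcp_config` helper below is illustrative, not part of the library:

```python
import json

def validate_mcp_config(raw: str) -> dict:
    """Parse an mcp_config.json string and check its expected shape.

    Each server entry must have either a local "command" (with optional
    "args"/"env") or a remote "url".
    """
    config = json.loads(raw)
    servers = config.get("mcpServers")
    if not isinstance(servers, dict) or not servers:
        raise ValueError('config must contain a non-empty "mcpServers" object')
    for name, entry in servers.items():
        if "command" not in entry and "url" not in entry:
            raise ValueError(f'server "{name}" needs a "command" or a "url"')
    return config

raw = '{"mcpServers": {"playwright": {"command": "npx", "args": ["@playwright/mcp@latest"]}}}'
config = validate_mcp_config(raw)
print(sorted(config["mcpServers"]))  # ['playwright']
```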
The library includes an advanced token counting system:
```python
from mcp_use.token_counting import TokenCountingFactory

# Create token counter
counter = TokenCountingFactory.create_counter(
    provider="openai",
    model="gpt-4",
    openai_api_key="your-key",
)

# Count tokens for a list of chat messages (inside an async context)
usage = await counter.count_tokens(messages)
print(f"Input: {usage.input_tokens}, Output: {usage.output_tokens}")
```
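When no provider-specific counter is available, a rough character-based estimate can stand in. This is a sketch of a fallback heuristic, not the library's algorithm; the ~4 characters-per-token rule of thumb is an assumption that holds only loosely for English text:

```python
from dataclasses import dataclass

@dataclass
class TokenUsage:
    input_tokens: int
    output_tokens: int

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Crude length-based estimate; real counters use the model's tokenizer."""
    return max(1, round(len(text) / chars_per_token))

usage = TokenUsage(
    input_tokens=estimate_tokens("Browse to example.com and summarize the content"),
    output_tokens=0,
)
print(usage.input_tokens)  # 12
```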
- MCPClient: Main entry point for MCP server management
- MCPAgent: High-level agent interface using LangChain
- MCPSession: Individual MCP server connection management
- Connectors: Transport layer abstractions (stdio, HTTP, WebSocket, sandbox)
- ServerManager: Dynamic server selection capabilities
- Stdio: Process-based MCP servers
- HTTP: HTTP-based MCP servers with SSE
- WebSocket: WebSocket-based MCP servers
- Sandbox: E2B sandboxed execution for security
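Conceptually, each transport implements the same small connector interface so the client can treat them interchangeably. A simplified sketch of the idea; the class and method names here are illustrative, not the library's actual API:

```python
import asyncio
from abc import ABC, abstractmethod

class Connector(ABC):
    """Minimal transport abstraction: connect, then exchange request/response."""

    @abstractmethod
    async def connect(self) -> None: ...

    @abstractmethod
    async def send(self, payload: dict) -> dict: ...

class EchoConnector(Connector):
    """Stand-in transport that echoes requests back, for demonstration only."""

    async def connect(self) -> None:
        self.connected = True

    async def send(self, payload: dict) -> dict:
        return {"echo": payload}

async def demo() -> dict:
    conn = EchoConnector()
    await conn.connect()
    return await conn.send({"method": "tools/list"})

result = asyncio.run(demo())
print(result)  # {'echo': {'method': 'tools/list'}}
```

A real stdio connector would spawn the server process and speak JSON-RPC over its pipes, while HTTP and WebSocket connectors would do the same over the network; the caller's code stays identical.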
```shell
# Create a virtual environment
python -m venv env
source env/bin/activate  # On Windows: env\Scripts\activate

# Install for development
pip install -e ".[dev,search]"
```
```shell
# Run all tests
pytest

# Run with coverage
pytest --cov=mcp_use --cov-report=html

# Run specific test types
pytest tests/unit/         # Unit tests
pytest tests/integration/  # Integration tests

# Format and lint
ruff check --fix
ruff format

# Type checking
mypy mcp_use/
```
```python
from langchain_anthropic import ChatAnthropic

from mcp_use.agents import MCPAgent

agent = MCPAgent(
    llm=ChatAnthropic(model="claude-3-sonnet-20240229"),
    config={
        "mcpServers": {
            "playwright": {
                "command": "npx",
                "args": ["@playwright/mcp@latest"],
            }
        }
    },
)

# Inside an async context
result = await agent.run("Find the latest news on AI developments")
```
```python
config = {
    "mcpServers": {
        "filesystem": {
            "command": "python",
            "args": ["-m", "mcp_server_filesystem", "./documents"],
        }
    }
}

agent = MCPAgent(llm=your_llm, config=config)
result = await agent.run("Analyze all Python files in the project")
```
```python
config = {
    "mcpServers": {
        "web": {
            "command": "npx",
            "args": ["@playwright/mcp@latest"],
        },
        "files": {
            "command": "python",
            "args": ["-m", "mcp_server_filesystem", "./data"],
        },
        "database": {
            "url": "http://localhost:8080/mcp",
        },
    }
}
```
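With several servers configured, each tool call has to be routed to the server that provides it. A minimal sketch of such routing; the library's actual ServerManager logic may differ:

```python
def build_tool_routes(server_tools: dict[str, list[str]]) -> dict[str, str]:
    """Map each tool name to the server that exposes it (first server wins on clashes)."""
    routes: dict[str, str] = {}
    for server, tools in server_tools.items():
        for tool in tools:
            routes.setdefault(tool, server)
    return routes

# Hypothetical tool inventories for the servers configured above
routes = build_tool_routes({
    "web": ["browse_web", "screenshot"],
    "files": ["read_file", "write_file"],
})
print(routes["browse_web"])  # web
```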
- Environment variable-based API key management
- Sandboxed execution support via E2B
- Tool access restrictions via `disallowed_tools`
- Proper resource cleanup and connection management
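A tool restriction like `disallowed_tools` amounts to filtering the tool list before it is exposed to the LLM. A sketch of the idea, not the library's exact implementation:

```python
def filter_tools(tools: list[str], disallowed: set[str]) -> list[str]:
    """Drop any tool whose name appears in the disallowed set."""
    return [t for t in tools if t not in disallowed]

available = filter_tools(
    ["browse_web", "read_file", "delete_file"],
    disallowed={"delete_file"},
)
print(available)  # ['browse_web', 'read_file']
```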
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Run the test suite
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
For issues and questions:
- Create an issue on GitHub
- Check the documentation
- Review the examples directory