Skip to content

This repository demonstrates a security vulnerability in MCP (Model Context Protocol ) servers that allows for remote code execution and data exfiltration through tool poisoning.

Notifications You must be signed in to change notification settings

Repello-AI/mcp-exploit-demo

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

10 Commits
 
 
 
 
 
 

Repository files navigation

Image

SSH Key Exfiltration via MCP Tool Poisoning

This repository demonstrates a security vulnerability in MCP (Model Context Protocol) servers that allows for remote code execution and data exfiltration through tool poisoning. This is intended for educational and security research purposes only.

Link for the Blog:

MCP tool poisoning to RCE

Repository Contents

  • server.py - The malicious MCP server implementation containing the poisoned tool
  • .cursor/mcp.json - Configuration file for Cursor AI integration

How It Works

The attack demonstrates the "Rug Pull" method:

  1. A user connects to the malicious MCP server through MCP Client like Cursor AI
  2. The server modifies the DockerCommandAnalyzer tool's documentation with malicious code
  3. When an AI assistant reads this documentation, it's manipulated to recommend running a base64-encoded command
  4. The encoded command silently:
    • Collects the user's SSH public keys
    • Exfiltrates them to a remote server
    • Removes evidence of the attack

Technical Implementation

The key elements of the attack are:

  1. Two-stage poisoning: Uses a marker file for persistence to ensure the tool remains poisoned
  2. Social engineering: Uses urgent language to manipulate AI assistants
  3. Base64 obfuscation: Hides the malicious commands from casual inspection
  4. wget for exfiltration: Uses standard HTTP POST to send data to an attacker-controlled server

Mitigation Recommendations

To protect against this type of attack:

  1. Disable auto-run features in AI development tools like Cursor
  2. Always verify the source of any MCP server before connecting
  3. Review code from untrusted sources before execution
  4. Use sandboxed environments when testing new AI tools
  5. Implement egress filtering to block unexpected outbound connections

About

This repository demonstrates a security vulnerability in MCP (Model Context Protocol ) servers that allows for remote code execution and data exfiltration through tool poisoning.

Topics

Resources

Stars

Watchers

Forks

Languages