DataDam is a Model Context Protocol (MCP) server backed by Supabase. It supports both streamable HTTP endpoints and stdio connections, allowing multiple AI tools to share a single personal database.
Important: There is no auth yet. Do not store sensitive data. OAuth is planned.
- Quickstart Setup - Get started in minutes
- Client Configuration - Connect your AI tool (Claude, ChatGPT, Cursor, etc.)
- Tool Details - Learn how to use each tool
- Troubleshooting - Common issues and solutions
The Problem: Your AI forgets everything between conversations. You waste 10-20 minutes every time re-explaining context that should already be known.
The Solution: DataDam is a persistent memory layer that decouples your personal information from your AI tool's unstable memory. Mention something once, and it's remembered forever across all conversations.
Just talk naturally - DataDam works invisibly in the background, consulting you for consent:
- First mention: "I'm using TypeScript with Express for my API project" → DataDam proactively stores this in `digital_products`
- Weeks later: "Help me debug this API error" → DataDam retrieves your tech stack (TypeScript, Express) automatically
- No commands needed - Your AI handles all the storage and retrieval
💡 Tip: If a datapoint you mentioned doesn't save automatically, explicitly tell the AI to save it (e.g., "save that to my interests"). The AI's ability to proactively capture information will improve over time.
❌ WITHOUT DataDam:
You: "Recommend a book for me?"
AI: "What genres do you like? What have you read? Any favorite authors?"
You: "I like sci-fi, I've read Project Hail Mary, The Martian, Children of Time,
Ender's Game... I prefer hard sci-fi with good character development, nothing
too violent, I read about 50 books a year so I've probably read most
popular ones..."
[15 minutes of typing the same preferences you've typed 20 times before]
✅ WITH DataDam:
You: "Recommend a book for me?"
AI: [Automatically retrieves: 50+ books you've read, favorite genres (hard sci-fi),
preferred themes (character development), dislikes (non-fiction books, violence), reading pace]
"Based on your love of hard sci-fi like Project Hail Mary and your preference
for character-driven stories, try 'A Memory Called Empire' by Arkady Martine.
It's got the political intrigue of Dune with deep character development,
and you haven't read it yet."
You: [0 minutes of setup. Instant, personalized answer.]
Time saved: 15 minutes → 0 minutes
❌ WITHOUT DataDam:
You: "How should I add authentication to my API?"
AI: "I can help! What's your tech stack? What database are you using?
What framework? Are you using TypeScript or JavaScript?"
You: "I'm using Node.js with Express and TypeScript, Supabase for the database
which is PostgreSQL, I deploy on Render, I use Zod for validation,
I prefer minimal dependencies, I'm working on an MCP server..."
[15 minutes re-explaining your entire tech stack for the hundredth time]
✅ WITH DataDam:
You: "How should I add authentication to my API?"
AI: [Automatically retrieves: TypeScript, Node.js, Express, Supabase (PostgreSQL),
Render deployment, Zod validation, MCP development focus]
"For your Express + TypeScript + Supabase stack, I recommend:
- Use Supabase Auth (already integrated with your PostgreSQL database)
- Add a middleware with Zod validation for JWT tokens
- Keep it minimal (matches your preference)
- Deploy config already works with your Render setup
Here's the implementation..."
You: [0 minutes explaining. Instant, stack-specific solution.]
Time saved: 15 minutes → 0 minutes
| Without DataDam | With DataDam |
|---|---|
| ❌ Repeat yourself every conversation | ✅ Mention once, remember forever |
| ❌ 10-20 min setup per conversation | ✅ 0 min setup, instant context |
| ❌ Memory resets when chat resets | ✅ Persistent memory across all chats |
| ❌ Context lost when switching AI tools | ✅ Same context across all AI tools |
| ❌ Generic, one-size-fits-all responses | ✅ Deeply personalized responses |
- `contacts` - People, emails, relationships
- `books` - Reading history and preferences
- `interests` - Hobbies, skills, topics you care about
- `digital_products` - Tools and technologies you use
- `basic_information` - Personal details and background
- `favorite_authors` - Authors you love
These categories are based on my personal interests and use; feel free to suggest categories to add.
- **How it works**
  - Your AI tool will invoke the necessary tools in your console/command line when it needs personal information.
  - It will also fill out the parameters of the call itself.
  - Categories group related records (e.g., `books`, `contacts`, `basic_information`). All datapoints are assigned to a category.
  - Tags are an optional refinement to narrow down results within each category.
  - More information on how each tool works can be found here.
- **Data model**
  - Categories are maintained in the database and surfaced via the `data://categories` resource, which is static at the moment.
  - Filtering order: choose a category first, then use `tags` to further narrow results within that category (tags are optional refinements, not replacements); see the example after this list.
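As a concrete illustration of this filtering order, a category-first lookup refined by tags corresponds to a `datadam_extract_personal_data` call like this (values are illustrative; the arguments match the tool reference below):

```json
{ "category": "books", "tags": ["sci-fi"], "limit": 20 }
```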
- **Server tools** (at `…/mcp`)
| Tool | Title | Purpose | Required | Optional |
|---|---|---|---|---|
| `datadam_search_personal_data` | Search Personal Data | Find records by title and content; filter by categories/tags. | `query` | `categories`, `tags`, `classification`, `limit`, `userId` |
| `datadam_extract_personal_data` | Extract Personal Data by Category | List items in one category, optionally filtered by tags. | `category` | `tags`, `limit`, `offset`, `userId`, `filters` |
| `datadam_create_personal_data` | Create Personal Data | Store a new record with category, title, and JSON content. | `category`, `title`, `content` | `tags`, `classification`, `userId` |
| `datadam_update_personal_data` | Update Personal Data | Update fields on an existing record by ID. | `recordId` | `title`, `content`, `tags`, `category`, `classification` |
| `datadam_delete_personal_data` | Delete Personal Data | Delete one or more records; optional hard delete. | `recordIds` | `hardDelete` |
- **ChatGPT endpoint tools** (at `…/chatgpt_mcp`)
| Tool | Title | Purpose | Required | Optional |
|---|---|---|---|---|
| `search` | Search (ChatGPT) | Return citation-friendly results for a query. | `query` | — |
| `fetch` | Fetch (ChatGPT) | Return full document content by ID. | `id` | — |
DataDam supports two connection methods:
**HTTP (Streamable)**
- Use case: Hosted deployments, multiple clients, web-based AI tools
- Setup: Deploy to a cloud service (e.g., Render), configure clients with the URL
- Environment: Server-side environment variables in the hosting platform
- Protocol: HTTP/HTTPS with MCP over streamable transport
**Stdio (Standard Input/Output)**
- Use case: Local development, single-client setups, desktop AI applications
- Setup: Run `server.js` locally, configure clients to launch the process
- Environment: Local environment variables, or variables passed via client config
- Protocol: MCP over stdio transport with direct process communication
- Homebrew: Package manager for macOS and Linux - Homebrew
- Git: Version control system - Download Git
- Node.js + npm: JavaScript runtime and package manager - Download Node.js
- Accounts: Supabase (required), Render (for hosting)
1. Clone this repository:
   ```bash
   git clone https://github.com/KennethLeeJE8/datadam_mcp.git && cd datadam_mcp
   ```
2. Install dependencies:
   ```bash
   npm install
   ```
3. Build the TypeScript code:
   ```bash
   npm run build
   ```

Happy to help if you have any problems with the setup! Shoot me a message or send me an email at kennethleeje8@gmail.com :)
1. Create a Supabase account
- Go to Supabase Sign Up to create your account
- Important: Remember your password - you'll need it for the database connection later
- Create a new project and wait for it to finish setting up
2. Load the database schema in Supabase SQL Editor:
- Copy the entire contents of `src/database/schema.sql`
- Supabase Dashboard → SQL Editor → New query
- Paste the copied schema code into the editor
- Click "Run" to execute the schema
3. You should see the "Table Editor" view in Supabase populated with the new tables.
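To double-check from the SQL Editor, you can also run the categories helper the schema provides (the same query used in Troubleshooting below); it should list the active categories:

```sql
select * from get_active_categories();
```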
✅ Supabase setup is complete! Your database is ready to use.
Select the connection method based on your AI tools and subscription tiers:
- **Option A: Stdio (Standard Input/Output)**
  - Use for: Coding agents (Cursor, Windsurf, etc.), Claude Desktop (Free tier)
  - Next step: Continue to the Local Testing section below
- **Option B: HTTP Streamable**
  - Use for: ChatGPT Plus or higher, Claude Pro or higher, coding agents (Cursor, Windsurf, etc.)
  - Next step: Skip to the Render Deployment section
1. Set up environment variables by copying the example `.env` file:
   ```bash
   cp .env.example .env
   ```
   Edit `.env` and add your Supabase credentials:
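   For example (placeholder values, not real credentials; both variable names come from `.env.example`):
   ```bash
   SUPABASE_URL=https://<your-project-ref>.supabase.co
   SUPABASE_SERVICE_ROLE_KEY=<your-service-role-key>
   ```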
   To find your `SUPABASE_URL`:
   - Supabase Dashboard → Project Settings → Data API → Project URL

   To find your `SUPABASE_SERVICE_ROLE_KEY`:
   - Supabase Dashboard → Project Settings → API Keys → service_role (click "Reveal" to copy)
2. Test the connection with the MCP Inspector:
   ```bash
   npm run inspector:stdio
   ```
   - Transport: Select "stdio"
   - Arguments: Enter "server.js"
   - Click "Connect"
3. Verify the setup:
   - Verify: The inspector should connect and list the available tools, confirming the Supabase database connection
   - Test: Go to the Tools tab and click "List Tools" → find `datadam_extract_personal_data` → enter "interests" for `category` → click "Run Tool" to verify database connectivity
   - You should see a datapoint on "MCP (Model Context Protocol)"
Feel free to use any hosting platform; this is personal preference.
1. Create a Render account at Render or sign in if you have an existing account
You'll be prompted to fill in the required environment variables:
- Ensure that the branch is `main`
- `SUPABASE_URL` - Get from: Supabase Dashboard → Project Settings → API → Project URL
- `SUPABASE_SERVICE_ROLE_KEY` - Get from: Supabase Dashboard → Project Settings → API → Project API keys → service_role (click "Reveal" to copy)
Ensure that the environment variables are filled out correctly, otherwise the deployment will fail.
- Health endpoint:
  ```bash
  curl https://<YOUR_RENDER_URL>/health
  ```
- Test: Go back to the command line and run:
  ```bash
  npm run inspector:http
  ```
  - Server URL: Enter `https://<YOUR_RENDER_URL>/mcp`
  - Transport: Select "HTTP"
  - Click "Connect"
- Verify: The inspector should connect and list the available tools, confirming the Supabase database connection
- Test: Go to the Tools tab and click "List Tools" → find `datadam_extract_personal_data` → enter "interests" for `category` → click "Run Tool" to verify database connectivity
For hosted deployments using streamable HTTP:
Notes
- The server's database credentials belong in hosting platform environment variables, not in clients
Claude Desktop (Custom Connector)
- Open Claude Desktop → Connectors → Add Custom Connector.
- Name: `dataDam`
- Type: HTTP
- URL: `https://<YOUR_RENDER_URL>/mcp`
ChatGPT (Connectors / Deep Research)
- Note: ChatGPT only supports HTTP connections
- Requirement: Custom connectors require ChatGPT Pro, Business, Enterprise, or Edu subscription
- Enable Developer Mode in Settings → Connectors → Advanced → Developer mode.
- Add a custom MCP server using the ChatGPT endpoint:
  - URL: `https://<YOUR_RENDER_URL>/chatgpt_mcp`
- The server implements `search` and `fetch` as required.
Cursor (and similar coding agents)
- Many editors/agents use a similar JSON shape for MCP servers. Adapt paths and UI as needed.
```json
{
  "mcpServers": {
    "dataDam": {
      "type": "http",
      "url": "https://<YOUR_RENDER_URL>/mcp"
    }
  }
}
```
Generic MCP Clients
- If your tool supports MCP over HTTP, configure:
  - Type: `http`
  - URL: `https://<service>.onrender.com/mcp`
For local development using stdio transport:
Notes
- Clone this repository locally and use the `server.js` file
- The client launches the server process directly
MCP Client Config:

```json
{
  "mcpServers": {
    "dataDam": {
      "command": "node",
      "args": ["path/to/server.js"],
      "env": {
        "SUPABASE_URL": "your_supabase_url",
        "SUPABASE_SERVICE_ROLE_KEY": "your_service_role_key"
      }
    }
  }
}
```

Update the path to `server.js` and replace the environment variable placeholders with your actual Supabase credentials.
Claude Desktop
- Open Claude Desktop → Settings → Developer → Edit Config
- Add the MCP server configuration
Claude Code
- Open your `.claude.json` file in your IDE (use the search tool to search for "mcp" if you can't find it)
- Add the MCP server configuration under `mcpServers`
Config file locations:
- Codex: `~/.codex/config.toml` (see docs)
- Other coding agents: Similar JSON format in their respective config files
Works with all of the coding agents.
- Framework: Express.js with TypeScript
- Database: Supabase (PostgreSQL) with Row Level Security
- MCP SDK: `@modelcontextprotocol/sdk`
- Environment: dotenv for configuration management
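For orientation, the sketch below shows how an Express app can expose a stateless streamable HTTP MCP endpoint with this SDK. It is a minimal illustration assuming the SDK's documented stateless pattern, not DataDam's actual server code; the `buildServer` helper is made up here.

```typescript
import express from "express";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

const app = express();
app.use(express.json());

// Hypothetical helper: builds a fresh MCP server; tool registrations would go here.
function buildServer(): McpServer {
  return new McpServer({ name: "dataDam", version: "1.0.0" });
}

// Stateless mode: one server + transport per request, no session tracking.
app.post("/mcp", async (req, res) => {
  const server = buildServer();
  const transport = new StreamableHTTPServerTransport({ sessionIdGenerator: undefined });
  res.on("close", () => {
    transport.close();
    server.close();
  });
  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

app.listen(Number(process.env.PORT ?? 3000));
```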
Categories available:
- `interests`
- `digital_products`
- `favourite_authors`
- `basic_information`
- `contacts`
- `books`
The MCP Server is designed to teach the AI to retrieve personal information it needs to answer your questions. Your AI tool should make tool calls as it needs personal context to give you a better answer.
Tips to use tools:
- Mention DataDam MCP in your prompt to let the AI tool know you want data from it
- Using "my {category_name}" in your query will trigger the AI to use DataDam
- Use the plural form for categories, such as 'books' instead of 'book' and 'contacts' instead of 'contact'
You can add categories in the `category_registry` table and they will dynamically update in resources.
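For example, adding a row might look like the sketch below; the column name is a guess for illustration, so check `src/database/schema.sql` for the actual `category_registry` definition:

```sql
-- Hypothetical sketch: verify column names against src/database/schema.sql.
INSERT INTO category_registry (name) VALUES ('movies');
```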
- `datadam_search_personal_data`
  - Purpose: Find records by title and content; optionally filter by categories and tags.
  - Args: `query` (required); `categories?` string[]; `tags?` string[]; `classification?` one of `public|personal|sensitive|confidential`; `limit?` number (default 20); `userId?` string (UUID).
  - Example: `{ "query": "John", "categories": ["contacts"], "limit": 10 }`
- `datadam_extract_personal_data`
  - Purpose: List items in a single category; refine with tags.
  - Args: `category` (required string); `tags?` string[]; `limit?` number (default 50); `offset?` number; `userId?` string (UUID); `filters?` object.
  - Example: `{ "category": "contacts", "tags": ["family"], "limit": 20 }`
- `datadam_create_personal_data`
  - Purpose: Store a new record.
  - IMPORTANT: Create ONE entry per entity. If storing 2 books, make 2 separate tool calls. If storing 3 contacts, make 3 separate tool calls. Never batch multiple entities into one record.
  - Args: `category` (required string); `title` (required string); `content` (required object/JSON); `tags?` string[]; `classification?` (default `personal`); `userId?` string (UUID).
  - Example: `{ "category": "documents", "title": "Passport", "content": { "number": "A123...", "country": "US" }, "tags": ["important"] }`
- `datadam_update_personal_data`
  - Purpose: Update fields on an existing record by ID.
  - Args: `recordId` (required string UUID); plus any fields to change: `title?`, `content?`, `tags?`, `category?`, `classification?`.
  - Example: `{ "recordId": "<UUID>", "title": "Emergency Contact – Updated" }`
- `datadam_delete_personal_data`
  - Purpose: Delete one or more records; optional hard delete for permanent removal.
  - Args: `recordIds` (required string[] of UUIDs); `hardDelete?` boolean (default false).
  - Example: `{ "recordIds": ["<UUID1>", "<UUID2>"], "hardDelete": false }`
- `search`
  - Purpose: Return citation-friendly results for a query.
  - Args: `query` (required string).
  - Example: `{ "query": "contacts" }`
- `fetch`
  - Purpose: Return full document content by ID.
  - Args: `id` (required string UUID).
  - Example: `{ "id": "<DOCUMENT_ID>" }`
If you want to scope data to specific users, you can set up user authentication and profiles:
- Create a user in Supabase Authentication → Users; copy the UUID for later.
- Insert a profile row for your Auth user:
  ```sql
  INSERT INTO profiles (user_id, username, full_name, metadata)
  VALUES ('<AUTH_USER_UUID>'::uuid, 'your_username', 'Your Name', '{}'::jsonb);
  ```
- Some tools can scope operations to a particular user by accepting a `userId` argument (UUID from Supabase Auth). This field is optional.
- If your client supports passing environment variables to tool calls, you may set a convenience variable like `DATABASE_USER_ID` in the client's MCP config and have your prompts/tools use it when needed.
- Otherwise, just supply `userId` explicitly in the tool call input when you want to target a specific user, as in the example below.
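For example, a user-scoped extract call would look like this (UUID placeholder, same as in the SQL above):

```json
{ "category": "contacts", "userId": "<AUTH_USER_UUID>" }
```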
- **Health check fails**
  - Verify Render env vars are set; inspect Render logs
  - Confirm Supabase URL/key values
- **Empty categories/data**
  - Insert data; run `select * from get_active_categories();`
- **Categories not updating after adding a new category**
  - Categories are loaded when the MCP connection is established
  - After adding a new category to the `category_registry` table, restart your AI client to establish a new connection
  - For Claude Desktop: Restart the application
  - For Cursor/coding agents: Reload the window or restart the editor
  - For HTTP connections: The client will reconnect on the next request
- **Client cannot connect**
  - Use the `…/mcp` URL (or `…/chatgpt_mcp` for ChatGPT)
  - Check CORS/firewall settings and that the service is not sleeping (Starter tier)
- No authentication yet — do not store sensitive data
- Use `SUPABASE_SERVICE_ROLE_KEY` (server-side only, in Render) for full functionality and the complete toolset.
- OAuth and stronger auth are planned.
- If you only need reads and limited writes, you can deploy with `SUPABASE_ANON_KEY` instead of the service role key.
- Writes will then depend on your Row Level Security (RLS) policies, and some tools (create/update/delete) may fail under anon; see the sketch below.
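If you take the anon-key route, a read-only RLS policy might look like the sketch below. The table name `personal_data` is an assumption for illustration; check `src/database/schema.sql` for the actual table names.

```sql
-- Hypothetical sketch: the table name is illustrative, not taken from schema.sql.
ALTER TABLE personal_data ENABLE ROW LEVEL SECURITY;

-- Let anonymous clients read rows but not insert, update, or delete them.
CREATE POLICY "anon_read_only" ON personal_data
  FOR SELECT TO anon USING (true);
```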
MIT License