A prototype toolhost that enables LLMs to perform basic file system operations through MCP over HTTP.
- `createFile` – Create new files
- `readFile` – Read file content
- `updateFile` – Overwrite file content
- `appendToFile` – Append text to a file
- `deleteFile` – Delete files
- `listFiles` – List contents of a directory
- `describeServer` – Returns the server's tool list & usage guide
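As a rough illustration, a couple of these tools might be registered with the TypeScript MCP SDK along the following lines. This is only a sketch under assumptions: the zod parameter schemas, handler bodies, and `BASE_DIR` sandbox directory are illustrative rather than MCP-FS's actual code, and newer SDK releases also offer a `registerTool` variant.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";
import fs from "node:fs/promises";
import path from "node:path";

// Hypothetical sandbox directory for all file operations (an assumption of this sketch).
const BASE_DIR = path.resolve("./workspace");

const server = new McpServer({ name: "mcp-fs", version: "0.1.0" });

// createFile: write new content to a path under BASE_DIR, failing if the file already exists.
server.tool(
  "createFile",
  { path: z.string(), content: z.string() },
  async ({ path: relPath, content }) => {
    await fs.writeFile(path.join(BASE_DIR, relPath), content, { flag: "wx" });
    return { content: [{ type: "text", text: `Created ${relPath}` }] };
  }
);

// readFile: return the contents of a file as text.
server.tool(
  "readFile",
  { path: z.string() },
  async ({ path: relPath }) => {
    const text = await fs.readFile(path.join(BASE_DIR, relPath), "utf8");
    return { content: [{ type: "text", text }] };
  }
);
```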
- Server: Exposes tools over the MCP HTTP transport using Express and `@modelcontextprotocol/sdk`.
- Client: Connects to the MCP server, fetches the tool list, routes user input to an LLM (e.g., via Ollama), and invokes tools based on the LLM's output (see the client sketch below).
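A minimal client-side sketch of this flow, assuming the server exposes a Streamable HTTP endpoint at `http://localhost:3000/mcp` (the URL, port, endpoint path, and the hard-coded tool call are all assumptions of this example):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Endpoint is an assumption of this sketch; use whatever address the server actually binds to.
const transport = new StreamableHTTPClientTransport(new URL("http://localhost:3000/mcp"));
const client = new Client({ name: "mcp-fs-client", version: "0.1.0" });

await client.connect(transport);

// Fetch the tool list so it can be described to the LLM.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

// Invoke a tool; in the real client the name and arguments come from the LLM's output.
const result = await client.callTool({
  name: "listFiles",
  arguments: { path: "." },
});
console.log(result.content);
```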
MCP-FS uses Ollama to run local LLMs and respond to file system tool requests.
Ollama allows you to run models like `mistral`, `llama2`, `codellama`, and others locally via a simple API.
Official GitHub: ollama/ollama
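To give a feel for how the client might route user input through Ollama and turn the reply into a tool call, here is a hedged sketch against Ollama's `/api/generate` endpoint. The prompt wording, the JSON tool-call convention, and the `mistral-nemo` model choice are assumptions of this example, not fixed parts of MCP-FS.

```typescript
// Ask a locally running model (e.g., one pulled with `ollama pull mistral-nemo`)
// to pick a tool and its arguments. The JSON-in/JSON-out convention is an
// assumption of this sketch, not something Ollama or MCP mandates.
async function chooseTool(userInput: string, toolNames: string[]) {
  const prompt =
    `You can call one of these tools: ${toolNames.join(", ")}.\n` +
    `Reply with JSON of the form {"tool": "...", "arguments": {...}} for this request:\n` +
    userInput;

  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "mistral-nemo", prompt, stream: false, format: "json" }),
  });

  // With streaming disabled, Ollama returns the generated text in the `response` field.
  const data = await res.json();
  return JSON.parse(data.response) as { tool: string; arguments: Record<string, unknown> };
}
```

The parsed `tool` and `arguments` could then be handed straight to the MCP client's `callTool`, which is the routing step described above.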
Start the Ollama server with `ollama serve`. By default, Ollama runs at http://localhost:11434.
Pull a model with `ollama pull mistral-nemo`. To confirm Ollama is up and the model is installed, the following command should list the available models:
`curl http://localhost:11434/api/tags`

To install and run MCP-FS:

1. Clone the repository: `git clone https://github.com/your-username/mcp-fs.git`
2. Enter the project directory: `cd mcp-fs`
3. Install dependencies: `npm install`
4. Start the server: `npm run server`
5. Start the client: `npm run client`

- Ollama Website: https://ollama.com
- Ollama GitHub: https://github.com/ollama/ollama
- Model Library: https://ollama.com/library