Commit f8ae46b

Merge pull request arben-adm#2 from smithery-ai/smithery/config-3bya

Deployment: Dockerfile and Smithery config

Parents: b8a1475, 2de7f24

File tree

3 files changed: +58 additions, -0 deletions


Dockerfile

Lines changed: 40 additions & 0 deletions
@@ -0,0 +1,40 @@
+# Generated by https://smithery.ai. See: https://smithery.ai/docs/config#dockerfile
+# Use a Python image with uv pre-installed
+FROM ghcr.io/astral-sh/uv:python3.11-bookworm-slim AS uv
+
+# Set working directory
+WORKDIR /app
+
+# Enable bytecode compilation
+ENV UV_COMPILE_BYTECODE=1
+
+# Copy from the cache instead of linking since it's a mounted volume
+ENV UV_LINK_MODE=copy
+
+# Copy pyproject.toml and uv.lock for dependency installation
+COPY pyproject.toml uv.lock /app/
+
+# Install the project's dependencies using the lockfile
+RUN --mount=type=cache,target=/root/.cache/uv \
+    uv sync --frozen --no-install-project --no-dev --no-editable
+
+# Add the rest of the project source code
+COPY . /app
+
+# Install the project
+RUN --mount=type=cache,target=/root/.cache/uv \
+    uv sync --frozen --no-dev --no-editable
+
+FROM python:3.11-slim-bookworm
+
+WORKDIR /app
+
+# Copy the virtual environment from the build stage
+COPY --from=uv /root/.local /root/.local
+COPY --from=uv --chown=app:app /app/.venv /app/.venv
+
+# Place executables in the environment at the front of the path
+ENV PATH="/app/.venv/bin:$PATH"
+
+# Specify entrypoint to run the server
+ENTRYPOINT ["uv", "run", "mcp_sequential_thinking/server.py"]
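Assuming this Dockerfile sits at the repository root, a local build and run of the image might look like the following sketch (the `sequential-thinking` tag is illustrative, not part of the commit):

```shell
# Build the two-stage image: the uv stage resolves dependencies from uv.lock,
# and the final slim stage receives only the prebuilt virtual environment.
docker build -t sequential-thinking .

# MCP stdio servers communicate over stdin/stdout, so run interactively.
docker run -i --rm sequential-thinking
```

The cache mounts (`--mount=type=cache,target=/root/.cache/uv`) keep uv's download cache out of the image layers while still speeding up repeated builds.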

README.md

Lines changed: 1 addition & 0 deletions
@@ -1,5 +1,6 @@
 # Enhanced Sequential Thinking MCP Server
 
+[![smithery badge](https://smithery.ai/badge/sequential-thinking)](https://smithery.ai/server/sequential-thinking)
 This project implements an advanced Sequential Thinking server using the Model Context Protocol (MCP). It provides a structured and flexible approach to problem-solving and decision-making through a series of thought steps, incorporating stages, scoring, and tagging.
 
 <a href="https://glama.ai/mcp/servers/m83dfy8feg"><img width="380" height="200" src="https://glama.ai/mcp/servers/m83dfy8feg/badge" alt="Sequential Thinking Server MCP server" /></a>

smithery.yaml

Lines changed: 17 additions & 0 deletions
@@ -0,0 +1,17 @@
+# Smithery configuration file: https://smithery.ai/docs/config#smitheryyaml
+
+startCommand:
+  type: stdio
+  configSchema:
+    # JSON Schema defining the configuration options for the MCP.
+    type: object
+    required:
+      - serverPath
+    properties:
+      serverPath:
+        type: string
+        description: The path to the directory containing the server script.
+  commandFunction:
+    # A function that produces the CLI command to start the MCP on stdio.
+    |-
+    (config) => ({ command: 'uv', args: ['run', config.serverPath + '/server.py'] })
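The `commandFunction` in the config above is a JavaScript arrow function that Smithery evaluates with the user-supplied configuration to produce the launch command. A standalone sketch of that mapping (the sample `serverPath` value is illustrative only):

```javascript
// Mirror of the commandFunction from smithery.yaml: maps a config object
// to the CLI invocation that starts the MCP server on stdio.
const commandFunction = (config) => ({
  command: 'uv',
  args: ['run', config.serverPath + '/server.py'],
});

// With serverPath pointing at the package directory, the resulting command
// matches the Dockerfile's ENTRYPOINT.
const cmd = commandFunction({ serverPath: 'mcp_sequential_thinking' });
console.log(cmd.command);        // "uv"
console.log(cmd.args.join(' ')); // "run mcp_sequential_thinking/server.py"
```

Because `serverPath` is listed under `required` in the `configSchema`, Smithery will reject a launch request that omits it before this function is ever called.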
