Commit b8a1475

Enhance README and server functionality for Sequential Thinking MCP Server
- Updated project description and features in README.md
- Added new features: thought categorization, dynamic adaptation, and summary generation
- Introduced ThoughtStage enum for structured thought processing in server.py
- Enhanced ThoughtData class with scoring and tagging capabilities
- Implemented generate_summary method to summarize the thinking process
1 parent 7cc8bb3 commit b8a1475

File tree: 2 files changed (+98, −34 lines)

README.md

Lines changed: 37 additions & 23 deletions
````diff
@@ -1,13 +1,16 @@
-# Sequential Thinking MCP Server
+# Enhanced Sequential Thinking MCP Server
 
-A Model Context Protocol (MCP) server that helps break down complex problems into clear, sequential steps. This tool enhances structured problem-solving by managing thought sequences, allowing revisions, and supporting multiple solution paths.
+This project implements an advanced Sequential Thinking server using the Model Context Protocol (MCP). It provides a structured and flexible approach to problem-solving and decision-making through a series of thought steps, incorporating stages, scoring, and tagging.
 
 <a href="https://glama.ai/mcp/servers/m83dfy8feg"><img width="380" height="200" src="https://glama.ai/mcp/servers/m83dfy8feg/badge" alt="Sequential Thinking Server MCP server" /></a>
 
 ## Features
 
-- 🧠 **Sequential Problem Solving**: Break down complex problems step-by-step
-- 📊 **Progress Tracking**: Monitor thought sequences and branches
+- 🧠 **Structured Problem Solving**: Break down complex problems into defined stages
+- 📊 **Progress Tracking**: Monitor thought sequences, branches, and revisions
+- 🏷️ **Thought Categorization**: Tag and score thoughts for better organization
+- 📈 **Dynamic Adaptation**: Adjust the thinking process as new insights emerge
+- 📝 **Summary Generation**: Get an overview of the entire thinking process
 
 ## Prerequisites
 
````
````diff
@@ -38,6 +41,12 @@ mcp-sequential-thinking/
 uv pip install -e .
 ```
 
+2. **Run the Server**
+```bash
+cd mcp_sequential_thinking
+uv run server.py
+```
+
 ## Claude Desktop Integration
 
 Add to your Claude Desktop configuration (`%APPDATA%\Claude\claude_desktop_config.json` on Windows):
````
````diff
@@ -58,13 +67,31 @@ Add to your Claude Desktop configuration (`%APPDATA%\Claude\claude_desktop_confi
 }
 ```
 
-## Development
+## API
 
-Test the server manually:
-```bash
-cd mcp_sequential_thinking
-uv run server.py
-```
+The server exposes two main tools:
+
+### 1. `sequential_thinking`
+
+This tool processes individual thoughts in the sequential thinking process.
+
+Parameters:
+- `thought` (str): The content of the current thought
+- `thought_number` (int): The sequence number of the current thought
+- `total_thoughts` (int): The total number of thoughts expected
+- `next_thought_needed` (bool): Whether another thought is needed
+- `stage` (str): The current stage of thinking (Problem Definition, Analysis, Ideation, Evaluation, Conclusion)
+- `is_revision` (bool, optional): Whether this revises previous thinking
+- `revises_thought` (int, optional): Which thought is being reconsidered
+- `branch_from_thought` (int, optional): Branching point thought number
+- `branch_id` (str, optional): Branch identifier
+- `needs_more_thoughts` (bool, optional): If more thoughts are needed
+- `score` (float, optional): Score for the thought (0.0 to 1.0)
+- `tags` (List[str], optional): List of tags for categorizing the thought
+
+### 2. `get_thinking_summary`
+
+This tool generates a summary of the entire thinking process.
 
 ## Troubleshooting
 
````
````diff
@@ -75,19 +102,6 @@ Common issues:
 - Check Claude Desktop logs: `%APPDATA%\Claude\logs`
 - Test manual server start
 
-## Parameters
-
-| Parameter | Description | Required |
-|-----------|-------------|----------|
-| `thought` | Current thinking step | Yes |
-| `thought_number` | Step sequence number | Yes |
-| `total_thoughts` | Estimated steps needed | Yes |
-| `next_thought_needed` | Indicates if more steps required | Yes |
-| `is_revision` | Marks thought revision | No |
-| `revises_thought` | Identifies thought being revised | No |
-| `branch_from_thought` | Starting point for new branch | No |
-| `branch_id` | Unique branch identifier | No |
-
 ## License
 
 MIT License
````
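The new `## API` section above replaces the old parameter table. As a rough illustration of the documented `sequential_thinking` parameters, a client call might carry arguments like the following (values are invented for illustration; only `stage` must match one of the five listed stage names):

```python
# Hypothetical arguments for the `sequential_thinking` tool; values are
# illustrative, not taken from the commit.
example_args = {
    "thought": "Define the problem: signup conversion dropped 20% last month.",
    "thought_number": 1,
    "total_thoughts": 5,
    "next_thought_needed": True,
    "stage": "Problem Definition",
    "score": 0.9,                   # optional, 0.0 to 1.0
    "tags": ["metrics", "signup"],  # optional
}

# The five stage names the `stage` parameter accepts, per the README:
VALID_STAGES = {"Problem Definition", "Analysis", "Ideation", "Evaluation", "Conclusion"}
assert example_args["stage"] in VALID_STAGES
```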

mcp_sequential_thinking/server.py

Lines changed: 61 additions & 11 deletions
````diff
@@ -1,24 +1,35 @@
-from dataclasses import dataclass
+from dataclasses import dataclass, field
 from typing import Dict, List, Optional, Any
 import json
+from enum import Enum
 from mcp.server.fastmcp import FastMCP
 from rich.console import Console
 from rich.panel import Panel
 from rich.text import Text
 
 console = Console(stderr=True)
 
+class ThoughtStage(Enum):
+    PROBLEM_DEFINITION = "Problem Definition"
+    ANALYSIS = "Analysis"
+    IDEATION = "Ideation"
+    EVALUATION = "Evaluation"
+    CONCLUSION = "Conclusion"
+
 @dataclass
 class ThoughtData:
     thought: str
     thought_number: int
     total_thoughts: int
     next_thought_needed: bool
+    stage: ThoughtStage
     is_revision: Optional[bool] = None
     revises_thought: Optional[int] = None
     branch_from_thought: Optional[int] = None
     branch_id: Optional[str] = None
     needs_more_thoughts: Optional[bool] = None
+    score: Optional[float] = None
+    tags: List[str] = field(default_factory=list)
 
 class SequentialThinkingServer:
     def __init__(self):
````
````diff
@@ -31,7 +42,8 @@ def _validate_thought_data(self, input_data: dict) -> ThoughtData:
             "thought": str,
             "thoughtNumber": int,
             "totalThoughts": int,
-            "nextThoughtNeeded": bool
+            "nextThoughtNeeded": bool,
+            "stage": str
         }
 
         for field, field_type in required_fields.items():
````
````diff
@@ -40,16 +52,24 @@ def _validate_thought_data(self, input_data: dict) -> ThoughtData:
             if not isinstance(input_data[field], field_type):
                 raise ValueError(f"Invalid type for {field}: expected {field_type}")
 
+        try:
+            stage = ThoughtStage(input_data["stage"])
+        except ValueError:
+            raise ValueError(f"Invalid stage: {input_data['stage']}")
+
         return ThoughtData(
             thought=input_data["thought"],
             thought_number=input_data["thoughtNumber"],
             total_thoughts=input_data["totalThoughts"],
             next_thought_needed=input_data["nextThoughtNeeded"],
+            stage=stage,
             is_revision=input_data.get("isRevision"),
             revises_thought=input_data.get("revisesThought"),
             branch_from_thought=input_data.get("branchFromThought"),
             branch_id=input_data.get("branchId"),
-            needs_more_thoughts=input_data.get("needsMoreThoughts")
+            needs_more_thoughts=input_data.get("needsMoreThoughts"),
+            score=input_data.get("score"),
+            tags=input_data.get("tags", [])
         )
 
     def _format_thought(self, thought_data: ThoughtData) -> Panel:
````
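The `try/except` in this hunk leans on `Enum` lookup-by-value: `ThoughtStage("Analysis")` returns the member whose value matches the string, and an unknown string raises `ValueError`, which the handler re-raises with a clearer message. The mechanism in isolation:

```python
from enum import Enum

class ThoughtStage(Enum):  # same members and values as in the commit
    PROBLEM_DEFINITION = "Problem Definition"
    ANALYSIS = "Analysis"
    IDEATION = "Ideation"
    EVALUATION = "Evaluation"
    CONCLUSION = "Conclusion"

# Enum lookup by value: this is what ThoughtStage(input_data["stage"]) relies on.
assert ThoughtStage("Analysis") is ThoughtStage.ANALYSIS

try:
    ThoughtStage("Brainstorm")
except ValueError as exc:
    print(exc)  # 'Brainstorm' is not a valid ThoughtStage
```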
````diff
@@ -67,11 +87,12 @@ def _format_thought(self, thought_data: ThoughtData) -> Panel:
             context = ""
             style = "blue"
 
-        header = Text(f"{prefix} {thought_data.thought_number}/{thought_data.total_thoughts}{context}", style=style)
+        header = Text(f"{prefix} {thought_data.thought_number}/{thought_data.total_thoughts} - {thought_data.stage.value}{context}", style=style)
         content = Text(thought_data.thought)
+        footer = Text(f"Score: {thought_data.score:.2f} | Tags: {', '.join(thought_data.tags)}" if thought_data.score is not None else f"Tags: {', '.join(thought_data.tags)}")
 
         return Panel.fit(
-            content,
+            Group(content, footer),
             title=header,
             border_style=style,
             padding=(1, 2)
````
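Two notes on this hunk. First, `Group(content, footer)` is used but `Group` is never imported anywhere in the diff; recent versions of rich expose `Group` in `rich.console`, so the import hunk above would presumably also need `from rich.console import Group`, otherwise rendering a thought raises `NameError`. Second, the footer drops the score segment when no score was supplied; that conditional in isolation (plain Python, no rich):

```python
from typing import List, Optional

def footer_text(score: Optional[float], tags: List[str]) -> str:
    # Mirrors the conditional in _format_thought: the score segment
    # appears only when a score was supplied.
    if score is not None:
        return f"Score: {score:.2f} | Tags: {', '.join(tags)}"
    return f"Tags: {', '.join(tags)}"

print(footer_text(0.85, ["ux", "signup"]))  # Score: 0.85 | Tags: ux, signup
print(footer_text(None, ["ux"]))            # Tags: ux
```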
````diff
@@ -105,8 +126,11 @@ def process_thought(self, input_data: Any) -> dict:
                     "thoughtNumber": thought_data.thought_number,
                     "totalThoughts": thought_data.total_thoughts,
                     "nextThoughtNeeded": thought_data.next_thought_needed,
+                    "stage": thought_data.stage.value,
                     "branches": list(self.branches.keys()),
-                    "thoughtHistoryLength": len(self.thought_history)
+                    "thoughtHistoryLength": len(self.thought_history),
+                    "score": thought_data.score,
+                    "tags": thought_data.tags
                 }, indent=2)
             }]
         }
````
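With this change the JSON body returned by `process_thought` also reports the stage, score, and tags alongside the bookkeeping fields. A sketch of the payload (keys taken from the hunk above, values invented):

```python
import json

# Illustrative response payload; keys match the hunk above, values are invented.
payload = json.dumps({
    "thoughtNumber": 2,
    "totalThoughts": 5,
    "nextThoughtNeeded": True,
    "stage": "Analysis",
    "branches": [],
    "thoughtHistoryLength": 2,
    "score": 0.7,
    "tags": ["signup"],
}, indent=2)

parsed = json.loads(payload)
assert parsed["stage"] == "Analysis"
```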
````diff
@@ -123,6 +147,17 @@ def process_thought(self, input_data: Any) -> dict:
                 "isError": True
             }
 
+    def generate_summary(self) -> str:
+        """Generate a summary of the thinking process."""
+        summary = []
+        for stage in ThoughtStage:
+            stage_thoughts = [t for t in self.thought_history if t.stage == stage]
+            if stage_thoughts:
+                summary.append(f"{stage.value}:")
+                for thought in stage_thoughts:
+                    summary.append(f"  - Thought {thought.thought_number}: {thought.thought[:50]}...")
+        return "\n".join(summary)
+
 def create_server() -> FastMCP:
     """Create and configure the MCP server."""
     mcp = FastMCP("sequential-thinking")
````
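`generate_summary` groups the history by stage in enum definition order and truncates each thought to a 50-character preview. A self-contained sketch of that logic (with a minimal stand-in for `ThoughtData`, no MCP or rich dependencies):

```python
from dataclasses import dataclass
from enum import Enum

class ThoughtStage(Enum):
    PROBLEM_DEFINITION = "Problem Definition"
    ANALYSIS = "Analysis"
    IDEATION = "Ideation"
    EVALUATION = "Evaluation"
    CONCLUSION = "Conclusion"

@dataclass
class Thought:  # stand-in for ThoughtData, keeping only the fields the summary uses
    thought: str
    thought_number: int
    stage: ThoughtStage

def generate_summary(history: list) -> str:
    # Same grouping as the commit: iterate stages in definition order,
    # list each stage's thoughts with a 50-character preview.
    summary = []
    for stage in ThoughtStage:
        stage_thoughts = [t for t in history if t.stage == stage]
        if stage_thoughts:
            summary.append(f"{stage.value}:")
            for t in stage_thoughts:
                summary.append(f"  - Thought {t.thought_number}: {t.thought[:50]}...")
    return "\n".join(summary)

history = [
    Thought("Signup conversion is low", 1, ThoughtStage.PROBLEM_DEFINITION),
    Thought("Drop-off concentrates on the password step", 2, ThoughtStage.ANALYSIS),
]
print(generate_summary(history))
```

Note that thoughts shorter than 50 characters still get a trailing `...`, as in the committed code.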
````diff
@@ -134,43 +169,58 @@ async def sequential_thinking(
         thought_number: int,
         total_thoughts: int,
         next_thought_needed: bool,
+        stage: str,
         is_revision: Optional[bool] = None,
         revises_thought: Optional[int] = None,
         branch_from_thought: Optional[int] = None,
         branch_id: Optional[str] = None,
-        needs_more_thoughts: Optional[bool] = None
+        needs_more_thoughts: Optional[bool] = None,
+        score: Optional[float] = None,
+        tags: Optional[List[str]] = None
     ) -> str:
-        """A detailed tool for dynamic and reflective problem-solving through thoughts.
+        """An advanced tool for dynamic and reflective problem-solving through structured thoughts.
 
         This tool helps analyze problems through a flexible thinking process that can adapt and evolve.
-        Each thought can build on, question, or revise previous insights as understanding deepens.
+        Each thought is categorized into specific stages, can be scored, tagged, and can build on,
+        question, or revise previous insights as understanding deepens.
 
         Args:
             thought: Your current thinking step
             thought_number: Current thought number in sequence
             total_thoughts: Current estimate of thoughts needed
             next_thought_needed: Whether another thought step is needed
+            stage: The current stage of thinking (Problem Definition, Analysis, Ideation, Evaluation, Conclusion)
            is_revision: Whether this revises previous thinking
             revises_thought: Which thought is being reconsidered
             branch_from_thought: Branching point thought number
             branch_id: Branch identifier
             needs_more_thoughts: If more thoughts are needed
+            score: Optional score for the thought (0.0 to 1.0)
+            tags: Optional list of tags for categorizing the thought
         """
         input_data = {
             "thought": thought,
             "thoughtNumber": thought_number,
             "totalThoughts": total_thoughts,
             "nextThoughtNeeded": next_thought_needed,
+            "stage": stage,
             "isRevision": is_revision,
             "revisesThought": revises_thought,
             "branchFromThought": branch_from_thought,
             "branchId": branch_id,
-            "needsMoreThoughts": needs_more_thoughts
+            "needsMoreThoughts": needs_more_thoughts,
+            "score": score,
+            "tags": tags or []
         }
 
         result = thinking_server.process_thought(input_data)
         return result["content"][0]["text"]
 
+    @mcp.tool()
+    async def get_thinking_summary() -> str:
+        """Generate a summary of the entire thinking process."""
+        return thinking_server.generate_summary()
+
     return mcp
 
 def main():
````
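The tool above accepts snake_case parameters and repacks them into the camelCase keys that `_validate_thought_data` expects, defaulting `tags` to an empty list. That mapping in isolation (a sketch; the helper name is invented):

```python
def to_camel_input(thought: str, thought_number: int, total_thoughts: int,
                   next_thought_needed: bool, stage: str, **optional) -> dict:
    # Mirrors the dict built inside the sequential_thinking tool:
    # snake_case arguments become camelCase keys for the validator.
    return {
        "thought": thought,
        "thoughtNumber": thought_number,
        "totalThoughts": total_thoughts,
        "nextThoughtNeeded": next_thought_needed,
        "stage": stage,
        "isRevision": optional.get("is_revision"),
        "revisesThought": optional.get("revises_thought"),
        "branchFromThought": optional.get("branch_from_thought"),
        "branchId": optional.get("branch_id"),
        "needsMoreThoughts": optional.get("needs_more_thoughts"),
        "score": optional.get("score"),
        "tags": optional.get("tags") or [],
    }

d = to_camel_input("test thought", 1, 3, True, "Analysis")
assert d["thoughtNumber"] == 1 and d["tags"] == []
```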
````diff
@@ -180,4 +230,4 @@ def main():
 
 if __name__ == "__main__":
     server = create_server()
-    server.run()
+    server.run()
````
