7 changes: 2 additions & 5 deletions conf/claude-local-marketplace/skills/local-research/SKILL.md
@@ -29,13 +29,10 @@ Use this skill when:
2. **Pick only one file (no read)**: Match the user's keywords to research names/content to find the most relevant research file. Do not read it yet; ask the user to confirm whether it is the right one, and list other possible matching files if any.
3. The user chooses a file by index; then you proceed to read the file.

An alternative way to load it quickly:

1. `cd ~/workspace/llm/research/ && sgrep index`
2. `cd ~/workspace/llm/research/ && sgrep search <keywords-or-user-input...>`

Read the results and load the correct research file.

You can use `rg` to grep the folder with multiple possible keywords to locate the file.

### When user requests new research to be created explicitly

1. **Generate Research Name**: Create a descriptive research name based on the user input as `<user-query>`; the user input may contain typos, so improve it.
5 changes: 4 additions & 1 deletion conf/claude-local-marketplace/skills/recent-history/SKILL.md
@@ -7,7 +7,8 @@ description: "This skill should be used when retrieving recent chat history, in

## Step 1: Prepare the history file

This command should run in current project root dir.
If there is a <last-session> block in your system prompt, just use it, then go to Step 2.
Otherwise, run the following command from the current project root directory.

```bash
bunx repomix --header-text "file path parent dir: .claude/sessions/" --quiet --no-dot-ignore --no-gitignore --no-git-sort-by-changes --no-file-summary --no-directory-structure .claude/sessions/ -o .claude/sessions/history.xml >/dev/null 2>&1 && echo "history exist" || echo "no history file"
@@ -17,6 +18,8 @@ If the command output is "no history file", just quit, because there are no sess

## Step 2: Query history (sessions are ordered old→new in the file)

If the user's intent concerns the last session and our <last-session> context is sufficient, just focus on <last-session>.

### Quick Overview: List all sessions with line numbers

```bash
3 changes: 3 additions & 0 deletions conf/llm/docs/coding-rules.md
@@ -27,6 +27,8 @@
- **Precedent:** Follow prior implementations for new features unless told otherwise.
- **Structured Plan:** For each step, specify target files and exact required changes.
- **Boundaries:** Keep business logic isolated from UI. Place demo/mock code at the top layer. Don't modify production code just for debugging.
- **Separation of concerns:** Bad: using another layer's implementation details in the current layer when the current layer should not know or care about them. Good: the current layer implements details based on its own knowledge; dependent layers can depend on that to derive their own implementation details.
- **Flexible consistency enforcement:** Only enforce consistency if it does not violate separation of concerns.
- **Abstraction:** Only use explicitly exposed abstractions from the immediate downstream layer—avoid private APIs, even for reuse.
- **Fail Fast:** Let bugs surface; do not mask errors with `try-catch` or optional chaining.
- **Comment Intent:** Use `FIXME`, `TODO`, and `NOTE` to flag issues, explain logic, document changes, and note trade-offs.
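The separation-of-concerns rule can be sketched as follows; the two layers and all names here are hypothetical illustrations, not code from this repository:

```python
# Hypothetical two-layer sketch: Storage (lower layer) owns its file-naming
# convention; Report (upper layer) must not re-derive that convention itself.

class Storage:
    """Lower layer: the naming scheme is its private implementation detail."""

    def path_for(self, key: str) -> str:
        return f"/data/{key}.json"


class Report:
    """Upper layer: depends only on what Storage explicitly exposes."""

    def __init__(self, storage: Storage) -> None:
        self.storage = storage

    def location(self, key: str) -> str:
        # Good: derive the path through the lower layer's exposed API.
        return self.storage.path_for(key)

    def location_bad(self, key: str) -> str:
        # Bad: duplicates Storage's naming scheme in this layer; it drifts
        # out of sync the moment Storage changes its convention.
        return f"/data/{key}.json"
```

Here `location` stays correct if `Storage.path_for` ever changes, while `location_bad` breaks silently, which is exactly the failure mode the rule guards against.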
@@ -35,6 +37,7 @@
- **Avoid introducing implementation complexity:** No backward-compatibility layers, feature flags, or toggles unless explicitly requested.
- **No external-data-based design:** Avoid designs that rely on external data (for example, using external API data to determine program logic or control flow); they will break when the external data changes.
- **Avoid outdated dependencies:** Use the latest stable version of dependencies unless there is a specific reason to use an older version. This is important to avoid a big refactor later.
- **No Weak Tests:** Disallow tests that are meaningless with respect to the implemented code, or that do not effectively validate the intended functionality. Bad: a test verifying "id should not start with a number" that simply constructs a string without a number and asserts on it, never calling any implemented code.
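The weak-test rule can be illustrated with a minimal sketch; `make_id` and both tests are hypothetical examples, not code from this repository:

```python
import re


def make_id(name: str) -> str:
    """Hypothetical function under test: ids must not start with a digit."""
    return f"id-{name}" if re.match(r"\d", name) else name


def test_weak():
    # Bad: builds its own digit-free string and never calls make_id,
    # so it passes no matter what make_id does.
    s = "abc"
    assert not s[0].isdigit()


def test_meaningful():
    # Good: exercises the implemented code on the risky input.
    assert not make_id("1abc")[0].isdigit()
    assert make_id("abc") == "abc"
```

`test_weak` would keep passing even if `make_id` were deleted; `test_meaningful` fails the moment the implementation stops handling digit-prefixed names.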

When editing code: (1) state your assumptions, (2) create/run minimal tests if possible, (3) generate diffs ready for review, (4) follow repository style.

2 changes: 1 addition & 1 deletion nix/hm/ai/claude/assets/CLAUDE.md
@@ -3,7 +3,7 @@
# More instructions

- At the start, list a high-level checklist (3–7 bullets) of conceptual steps; omit implementation details. Don't confuse this with the Todo tools; you can use the checklist method when working on a todo item.
- Use `recent-history` Skill if you need more context of what user are talking about at the start. you can delegate this to general subagent to summarize recent conversation.
- Read <last-session> or use the `recent-history` Skill if you need more context about what the user is talking about at the start. You can delegate this to a general subagent to summarize the recent conversation.
- Do not use the Plan tools; just plan without them, propose your plan to the user directly, and wait for confirmation.
- **Critical**: When constructing the "Prompt" for subagents or the Task tool, explicitly give instructions about the subagent's role and the limitations on what it cannot do based on its definition, for example, "do not run install commands or edit files", or "Only search for file and code-snippet locations, do not run debug commands" when using the Explore subagent.

14 changes: 11 additions & 3 deletions nix/hm/ai/claude/hooks/session_save.py
@@ -225,7 +225,7 @@ def generate_summary(messages: list, cwd: str) -> str:
return ""


def save_summary(cwd: str, summary: str, timestamp: str, session_id: str):
def save_summary(cwd: str, summary: str, timestamp: str, session_id: str, session_file: str = ""):
"""Save summary to session-summary directory with session ID.

NOTE: This function is paired with `session_summary.py` which reads these files
@@ -241,6 +241,7 @@ def save_summary(cwd: str, summary: str, timestamp: str, session_id: str):
summary: Summary text
timestamp: Timestamp string (YYYYMMDD-HHMMSS)
session_id: Session ID
session_file: Path to the full session file (relative to cwd)
"""
if not summary or not session_id:
return
@@ -257,10 +258,15 @@ def save_summary(cwd: str, summary: str, timestamp: str, session_id: str):
# Use existing file or create new one with timestamp
summary_file = existing_file or summary_dir / f"{timestamp}-summary-ID_{session_id}.md"

# Append session file reference if available
content = summary
if session_file:
content = f"{summary}\n\nFull session: {session_file}"

try:
# Always overwrite summary (update, don't append)
with open(summary_file, "w") as f:
f.write(summary)
f.write(content)
os.chmod(summary_file, 0o600)
print(f"✓ Summary saved to {summary_file.name}", file=sys.stderr)
except Exception as e:
@@ -322,7 +328,9 @@ def save_session(cwd: str, messages: list, session_id: str, reason: str = ""):
# Generate and save summary for next session (only on "clear" reason)
if reason == "clear":
summary = generate_summary(messages, cwd)
save_summary(cwd, summary, timestamp, session_id)
# Pass relative session file path
session_file_rel = str(filepath.relative_to(cwd)) if filepath else ""
save_summary(cwd, summary, timestamp, session_id, session_file_rel)
elif reason:
print(f"⊘ Summary skipped (reason='{reason}', need 'clear')", file=sys.stderr)

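The `filepath.relative_to(cwd)` call in the hunk above assumes the session file sits under the project root; `pathlib.Path.relative_to` raises `ValueError` otherwise. A minimal sketch of that behavior, with made-up paths:

```python
from pathlib import Path

cwd = Path("/home/user/project")  # hypothetical project root
filepath = cwd / ".claude/sessions/20250101-120000-session.md"

# Inside cwd: relative_to succeeds and yields the path relative to the root.
session_file_rel = str(filepath.relative_to(cwd))

# Outside cwd: relative_to raises ValueError, so a guard (or the falsy
# filepath check in the hunk) is needed before calling it.
try:
    Path("/tmp/elsewhere.md").relative_to(cwd)
    outside_ok = True
except ValueError:
    outside_ok = False
```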
6 changes: 3 additions & 3 deletions nix/hm/ai/claude/hooks/session_summary.py
@@ -79,12 +79,12 @@ def main():
sys.exit(0)

# Return as additional context for new session
context = f"""## Previous Session Context
context = f"""<last-session>

{summary}"""
{summary}</last-session>"""

# Create a visible system message
system_msg = f"📝 Previous session: {summary}"
system_msg = "History context loaded"

output = {
"systemMessage": system_msg,
5 changes: 1 addition & 4 deletions nix/hm/ai/claude/settings.json
@@ -151,6 +151,7 @@
"WebFetch",
"WebSearch",
"Read(./.env)",
"Read(/dev/**)",
"Read(./.env.*)",
"Read(.private.*)",
"Read(./secrets/**)",
@@ -205,10 +206,6 @@
{
"type": "command",
"command": "uv run ~/.dotfiles/nix/hm/ai/claude/hooks/session_summary.py"
},
{
"type": "command",
"command": "uv run ~/.dotfiles/nix/hm/ai/claude/hooks/session_start_handoff.py"
}
]
}
95 changes: 48 additions & 47 deletions nix/hm/ai/codex/default.nix
@@ -6,7 +6,11 @@
}:
let
proxyConfig = import ../../../lib/proxy.nix { inherit lib pkgs; };
mcp = import ../../../modules/ai/mcp.nix { inherit pkgs lib config; };
codex_home = "${config.xdg.configHome}/codex";
codexMcpToml = builtins.readFile (
(pkgs.formats.toml { }).generate "codex-mcp.toml" { mcp_servers = mcp.clients.codex; }
);
# codex_config_file = "${codex_home}/config.toml";
# like commands in other agents
# prompts_dir = "${codex_home}/prompts";
@@ -30,15 +34,34 @@ in
source = ./instructions;
recursive = true;
};
"codex/skills" = {
source = ../../../../conf/claude-local-marketplace/skills;
recursive = true;
};
# toml
"codex/config.toml".text = ''
model = "gpt-5"
model_provider = "litellm"
"codex/config-generated.toml".text = ''
model = "gpt-5.2-medium"
model_provider = "packy"
approval_policy = "untrusted"
model_reasoning_effort = "low"
model_reasoning_effort = "medium"
# The AGENTS.md contains instructions for using the codex MCP; do not use it
# experimental_instructions_file = "${config.xdg.configHome}/AGENTS.md"
sandbox_mode = "read-only"
project_doc_fallback_filenames = ["CLAUDE.md"]
sandbox_mode = "workspace-write"

[features]
tui2 = true
skills = true
unified_exec = true
apply_patch_freeform = true
view_image_tool = false
ghost_commit = false

[model_providers.packy]
name = "packy"
wire_api = "responses"
base_url = "https://www.packyapi.com/v1"
env_key = "PACKYCODE_CODEX_API_KEY"

[model_providers.litellm]
name = "litellm"
@@ -104,31 +127,17 @@ in
hide_agent_reasoning = true
model_verbosity = "low"

[profiles.sage_slow]
model = "glm-4.6"
model_provider = "zhipuai-coding-plan"
sandbox_mode = "read-only"
experimental_instructions_file = "${codex_home}/instructions/sage-role.md"
approval_policy = "never"
model_reasoning_effort = "medium"
model_reasoning_summary = "concise"
hide_agent_reasoning = true
model_verbosity = "low"

[profiles.sage]
model = "kimi-k2-turbo-preview"
model_provider = "moonshot"
sandbox_mode = "read-only"
experimental_instructions_file = "${codex_home}/instructions/sage-role.md"
approval_policy = "never"
model_reasoning_effort = "low"
model_reasoning_summary = "concise"
hide_agent_reasoning = true
model_verbosity = "medium"

[tui]
# notifications = [ "agent-turn-complete", "approval-requested" ]
notifications = true
animations = false
scroll_events_per_tick = 3
scroll_wheel_lines = 3
scroll_mode = "auto"

[sandbox_workspace_write]
network_access = true
writable_roots = ["${config.home.homeDirectory}/workspace/work"]

[shell_environment_policy]
inherit = "core"
@@ -140,26 +149,18 @@
set = { HTTP_PROXY = "${proxyConfig.proxies.http}", HTTPS_PROXY = "${proxyConfig.proxies.https}" }

## MCP
[mcp_servers.chromedev]
command = "bunx"
args = ["chrome-devtools-mcp@latest", "--browser-url=http://127.0.0.1:9222"]

# [mcp_servers.context7]
# command = "bunx"
# args = ["@upstash/context7-mcp"]

# [mcp_servers.mermaid]
# command = "bunx"
# args = ["@devstefancho/mermaid-mcp"]

# [mcp_servers.sequentialthinking]
# command = "bunx"
# args = ["@modelcontextprotocol/server-sequential-thinking"]

# [mcp_servers.github]
# command = "github-mcp-server"
# args = ["stdio", "--dynamic-toolsets"]
# env = { GITHUB_PERSONAL_ACCESS_TOKEN = "${pkgs.nix-priv.keys.github.accessToken}" }
${codexMcpToml}
'';
};

home.activation = {
setupCodexConfig = lib.hm.dag.entryAfter [ "writeBoundary" ] ''
CODEX_HOME="${codex_home}"

cp -f ${codex_home}/config-generated.toml "${codex_home}/config.toml"
chmod u+w "${codex_home}/config.toml"

cat ${../../../../conf/llm/docs/coding-rules.md} > ${codex_home}/AGENTS.md
'';
};
}
36 changes: 0 additions & 36 deletions nix/hm/ai/codex/instructions/sage-role.md

This file was deleted.

2 changes: 1 addition & 1 deletion nix/hm/ai/default.nix
@@ -10,7 +10,7 @@
./legacy.nix
./claude
./codex
./forge
# ./forge
# ./windsurf
# ./cline
./droid
34 changes: 17 additions & 17 deletions nix/hm/litellm/bender-muffin.nix
@@ -28,7 +28,7 @@
api_key = pkgs.nix-priv.keys.zenmux.apiKey;
use_in_pass_through = true;
max_tokens = 64000;
rpm = 3;
rpm = 1;
};
model_info = {
max_output_tokens = 64000;
@@ -42,7 +42,7 @@
api_key = pkgs.nix-priv.keys.zenmux.apiKey;
max_tokens = 64000;
use_in_pass_through = true;
rpm = 2;
rpm = 1;
};
model_info = {
max_output_tokens = 64000;
@@ -74,19 +74,19 @@
# max_output_tokens = 16000;
# };
# }
# {
# model_name = "bender-muffin";
# litellm_params = {
# model = "openai/minimax-m2";
# api_key = pkgs.nix-priv.keys.minimax.codingPlanApiKey;
# api_base = "https://api.minimaxi.com/v1";
# max_tokens = 128000;
# rpm = 2;
# };
# model_info = {
# max_output_tokens = 128000;
# };
# }
{
model_name = "bender-muffin";
litellm_params = {
model = "anthropic/MiniMax-M2.1";
api_key = pkgs.nix-priv.keys.minimax.codingPlanApiKey;
api_base = "https://api.minimaxi.com/anthropic";
max_tokens = 1000;
rpm = 5;
};
model_info = {
max_output_tokens = 128000;
};
}
# {
# model_name = "bender-muffin";
# litellm_params = {
@@ -133,11 +133,11 @@
# {
# model_name = "bender-muffin";
# litellm_params = {
# model = "openai/qwen3-coder";
# model = "openai/glm-4.7-free";
# api_base = "https://opencode.ai/zen/v1";
# api_key = pkgs.nix-priv.keys.opencode.apiKey;
# max_tokens = 65536;
# rpm = 5;
# rpm = 3;
# };
# model_info = {
# max_output_tokens = 65536;