25 commits
- `a57c238` feature gpt-5-codex responses api (caozhiyuan, Sep 26, 2025)
- `87899a1` feat: enhance output type for function call and add content conversio… (caozhiyuan, Sep 29, 2025)
- `4fc0fa0` refactor: optimize content conversion logic in convertToolResultConte… (caozhiyuan, Sep 29, 2025)
- `2b9733b` refactor: remove unused function call output type and simplify respon… (caozhiyuan, Sep 30, 2025)
- `505f648` feat: add signature and reasoning handling to responses translation a… (caozhiyuan, Sep 30, 2025)
- `9477b45` feat: add signature to thinking messages and enhance reasoning struct… (caozhiyuan, Sep 30, 2025)
- `44551f9` refactor: remove summaryIndex from ResponsesStreamState and related h… (caozhiyuan, Sep 30, 2025)
- `708ae33` feat: enhance streaming response handling with ping mechanism (caozhiyuan, Sep 30, 2025)
- `47fb3e4` feat: responses translation add cache_read_input_tokens (caozhiyuan, Oct 1, 2025)
- `2800ed3` feat: enhance response event handling with event types and improved p… (caozhiyuan, Oct 5, 2025)
- `619d482` feat: improve event log and enhance reasoning content handling by add… (caozhiyuan, Oct 7, 2025)
- `5c6e4c6` 1.fix claude code 2.0.28 warmup request consume premium request, forc… (caozhiyuan, Oct 7, 2025)
- `32cb10a` fix: the cluade code small model where max_tokens is only 512, which … (caozhiyuan, Nov 3, 2025)
- `9051a21` feat: add model reasoning efforts configuration and integrate into me… (caozhiyuan, Nov 3, 2025)
- `eeeb820` fix: ensure application directory is created when config file is missing (caozhiyuan, Nov 3, 2025)
- `3f69f13` feat: consola file logger for handler.ts (caozhiyuan, Oct 29, 2025)
- `4c0d775` fix: copolit function call returning infinite line breaks until max_t… (caozhiyuan, Oct 30, 2025)
- `1ec12db` feat: add verbose logging configuration to enhance log detail level (caozhiyuan, Nov 3, 2025)
- `174e868` fix: update verbose property to be required in State interface and ad… (caozhiyuan, Nov 3, 2025)
- `83cdfde` Merge remote-tracking branch 'remotes/origin/master' into feature/res… (caozhiyuan, Nov 3, 2025)
- `6f47926` fix: correct typo in warning message and refine whitespace handling l… (caozhiyuan, Nov 6, 2025)
- `01d4adb` fix: update token counting logic for GPT and Claude and Grok models, … (caozhiyuan, Nov 10, 2025)
- `3cdc32c` fix: extend whitespace handling in updateWhitespaceRunState to includ… (caozhiyuan, Nov 19, 2025)
- `f7835a4` Remove incompatible with copilot responses `service_tier` field (#45) (sr-tream, Nov 22, 2025)
- `318855e` feat(config): enhance model configuration with automatic defaults and… (caozhiyuan, Dec 5, 2025)
33 changes: 28 additions & 5 deletions README.md
@@ -177,6 +177,28 @@ The following command line options are available for the `start` command:
| ------ | ------------------------- | ------- | ----- |
| --json | Output debug info as JSON | false | none |

## Configuration (config.json)

- **Location:** `~/.local/share/copilot-api/config.json` (Linux/macOS) or `%USERPROFILE%\.local\share\copilot-api\config.json` (Windows).
- **Default shape:**
```json
{
"extraPrompts": {
"gpt-5-mini": "<built-in exploration prompt>",
"gpt-5.1-codex-max": "<built-in exploration prompt>"
},
"smallModel": "gpt-5-mini",
"modelReasoningEfforts": {
"gpt-5-mini": "low"
}
}
```
- **extraPrompts:** Map of `model -> prompt`; the matching prompt is appended to the first system prompt when translating Anthropic-style requests to Copilot. Use this to inject guardrails or guidance per model. Missing default entries are auto-added without overwriting your custom prompts.
- **smallModel:** Fallback model used for tool-less warmup messages (e.g., Claude Code probe requests) to avoid spending premium requests; defaults to `gpt-5-mini`.
- **modelReasoningEfforts:** Per-model `reasoning.effort` sent to the Copilot Responses API. Allowed values are `none`, `minimal`, `low`, `medium`, `high`, and `xhigh`. If a model isn’t listed, `high` is used by default.

Edit this file to customize prompts or swap in your own fast model, as in the sample below. Restart the server (or rerun the command) after changes so the cached config is refreshed.
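For example, a config that swaps in a different small model and tunes per-model effort might look like this (the `gpt-4.1-mini` name is purely illustrative):

```json
{
  "extraPrompts": {
    "gpt-5-mini": "<built-in exploration prompt>",
    "gpt-5.1-codex-max": "<built-in exploration prompt>"
  },
  "smallModel": "gpt-4.1-mini",
  "modelReasoningEfforts": {
    "gpt-4.1-mini": "minimal",
    "gpt-5.1-codex-max": "xhigh"
  }
}
```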

## API Endpoints

The server exposes several endpoints to interact with the Copilot API. It provides OpenAI-compatible endpoints and now also includes support for Anthropic-compatible endpoints, allowing for greater flexibility with different tools and services.
@@ -185,11 +207,12 @@ The server exposes several endpoints to interact with the Copilot API. It provid

These endpoints mimic the OpenAI API structure.

| Endpoint | Method | Description |
| --------------------------- | ------ | --------------------------------------------------------- |
| `POST /v1/chat/completions` | `POST` | Creates a model response for the given chat conversation. |
| `GET /v1/models` | `GET` | Lists the currently available models. |
| `POST /v1/embeddings` | `POST` | Creates an embedding vector representing the input text. |
| Endpoint | Method | Description |
| --------------------------- | ------ | ---------------------------------------------------------------- |
| `POST /v1/responses` | `POST` | OpenAI Most advanced interface for generating model responses. |
**Copilot AI** commented (Dec 5, 2025):

The README table formatting is inconsistent. Line 212 has "OpenAI Most advanced interface" which appears to be missing proper capitalization and could be clearer. Consider: "OpenAI's most advanced interface for generating model responses" or simply "Generates model responses using the Responses API".

Suggested change:
- | `POST /v1/responses` | `POST` | OpenAI Most advanced interface for generating model responses. |
+ | `POST /v1/responses` | `POST` | OpenAI's most advanced interface for generating model responses. |

| `POST /v1/chat/completions` | `POST` | Creates a model response for the given chat conversation. |
| `GET /v1/models` | `GET` | Lists the currently available models. |
| `POST /v1/embeddings` | `POST` | Creates an embedding vector representing the input text. |
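For reference, a minimal client call against the new endpoint might look like this (the port and payload are illustrative; see the Responses API docs for the full schema):

```ts
// Hypothetical request to the local proxy's new Responses endpoint.
const res = await fetch("http://localhost:4141/v1/responses", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({
    model: "gpt-5-mini",
    input: "Summarize the last commit.",
  }),
})
console.log(await res.json())
```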

### Anthropic Compatible Endpoints

142 changes: 142 additions & 0 deletions src/lib/config.ts
@@ -0,0 +1,142 @@
import consola from "consola"
import fs from "node:fs"

import { PATHS } from "./paths"

export interface AppConfig {
  extraPrompts?: Record<string, string>
  smallModel?: string
  modelReasoningEfforts?: Record<
    string,
    "none" | "minimal" | "low" | "medium" | "high" | "xhigh"
  >
}

const gpt5ExplorationPrompt = `## Exploration and reading files
- **Think first.** Before any tool call, decide ALL files/resources you will need.
- **Batch everything.** If you need multiple files (even from different places), read them together.
- **multi_tool_use.parallel** Use multi_tool_use.parallel to parallelize tool calls and only this.
- **Only make sequential calls if you truly cannot know the next file without seeing a result first.**
- **Workflow:** (a) plan all needed reads → (b) issue one parallel batch → (c) analyze results → (d) repeat if new, unpredictable reads arise.`

const defaultConfig: AppConfig = {
  extraPrompts: {
    "gpt-5-mini": gpt5ExplorationPrompt,
    "gpt-5.1-codex-max": gpt5ExplorationPrompt,
  },
  smallModel: "gpt-5-mini",
  modelReasoningEfforts: {
    "gpt-5-mini": "low",
  },
}

let cachedConfig: AppConfig | null = null

function ensureConfigFile(): void {
  try {
    fs.accessSync(PATHS.CONFIG_PATH, fs.constants.R_OK | fs.constants.W_OK)
  } catch {
    fs.mkdirSync(PATHS.APP_DIR, { recursive: true })
    fs.writeFileSync(
      PATHS.CONFIG_PATH,
      `${JSON.stringify(defaultConfig, null, 2)}\n`,
      "utf8",
    )
    try {
      fs.chmodSync(PATHS.CONFIG_PATH, 0o600)
    } catch {
**Copilot AI** commented (Dec 5, 2025):

The file permissions are set to 0o600 (owner read/write only) for the config file, which is good for security. However, if chmodSync fails, the function silently returns without logging or throwing an error. This could leave the config file with overly permissive permissions. Consider at least logging a warning if the chmod operation fails.

Suggested change:
- } catch {
+ } catch (error) {
+   consola.warn(
+     `Failed to set secure permissions (0o600) on config file at ${PATHS.CONFIG_PATH}. File may have overly permissive permissions.`,
+     error
+   )

      return
    }
  }
}

function readConfigFromDisk(): AppConfig {
  ensureConfigFile()
  try {
    const raw = fs.readFileSync(PATHS.CONFIG_PATH, "utf8")
    if (!raw.trim()) {
      fs.writeFileSync(
        PATHS.CONFIG_PATH,
        `${JSON.stringify(defaultConfig, null, 2)}\n`,
        "utf8",
      )
      return defaultConfig
    }
    return JSON.parse(raw) as AppConfig
  } catch (error) {
    consola.error("Failed to read config file, using default config", error)
    return defaultConfig
  }
}

function mergeDefaultExtraPrompts(config: AppConfig): {
  mergedConfig: AppConfig
  changed: boolean
} {
  const extraPrompts = config.extraPrompts ?? {}
  const defaultExtraPrompts = defaultConfig.extraPrompts ?? {}

  const missingExtraPromptModels = Object.keys(defaultExtraPrompts).filter(
    (model) => !Object.hasOwn(extraPrompts, model),
  )

  if (missingExtraPromptModels.length === 0) {
    return { mergedConfig: config, changed: false }
  }

  return {
    mergedConfig: {
      ...config,
      extraPrompts: {
        ...defaultExtraPrompts,
        ...extraPrompts,
      },
    },
    changed: true,
  }
}

export function mergeConfigWithDefaults(): AppConfig {
  const config = readConfigFromDisk()
  const { mergedConfig, changed } = mergeDefaultExtraPrompts(config)

  if (changed) {
    try {
      fs.writeFileSync(
        PATHS.CONFIG_PATH,
        `${JSON.stringify(mergedConfig, null, 2)}\n`,
        "utf8",
      )
    } catch (writeError) {
      consola.warn(
        "Failed to write merged extraPrompts to config file",
        writeError,
      )
    }
  }

  cachedConfig = mergedConfig
  return mergedConfig
}

export function getConfig(): AppConfig {
  cachedConfig ??= readConfigFromDisk()
  return cachedConfig
}

export function getExtraPromptForModel(model: string): string {
  const config = getConfig()
  return config.extraPrompts?.[model] ?? ""
}

export function getSmallModel(): string {
  const config = getConfig()
  return config.smallModel ?? "gpt-5-mini"
}

export function getReasoningEffortForModel(
  model: string,
): "none" | "minimal" | "low" | "medium" | "high" | "xhigh" {
  const config = getConfig()
  return config.modelReasoningEfforts?.[model] ?? "high"
}
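A minimal sketch of how these helpers might be consumed when assembling a Copilot request (the call site and field names are hypothetical, not part of this PR):

```ts
import {
  getExtraPromptForModel,
  getReasoningEffortForModel,
  getSmallModel,
} from "./config"

// Hypothetical request assembly: append the per-model extra prompt to the
// first system prompt and attach the configured reasoning effort.
function buildResponsesPayload(model: string, systemPrompt: string) {
  const extra = getExtraPromptForModel(model)
  return {
    model,
    instructions: extra ? `${systemPrompt}\n\n${extra}` : systemPrompt,
    reasoning: { effort: getReasoningEffortForModel(model) }, // "high" if unconfigured
  }
}

// Tool-less warmup probes can be routed to the cheaper small model.
const warmupModel = getSmallModel() // "gpt-5-mini" unless overridden in config.json
```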
182 changes: 182 additions & 0 deletions src/lib/logger.ts
@@ -0,0 +1,182 @@
import consola, { type ConsolaInstance } from "consola"
import fs from "node:fs"
import path from "node:path"
import util from "node:util"

import { PATHS } from "./paths"
import { state } from "./state"

const LOG_RETENTION_DAYS = 7
const LOG_RETENTION_MS = LOG_RETENTION_DAYS * 24 * 60 * 60 * 1000
const CLEANUP_INTERVAL_MS = 24 * 60 * 60 * 1000
const LOG_DIR = path.join(PATHS.APP_DIR, "logs")
const FLUSH_INTERVAL_MS = 1000
const MAX_BUFFER_SIZE = 100

const logStreams = new Map<string, fs.WriteStream>()
const logBuffers = new Map<string, Array<string>>()

const ensureLogDirectory = () => {
  if (!fs.existsSync(LOG_DIR)) {
    fs.mkdirSync(LOG_DIR, { recursive: true })
  }
}

const cleanupOldLogs = () => {
  if (!fs.existsSync(LOG_DIR)) {
    return
  }

  const now = Date.now()

  for (const entry of fs.readdirSync(LOG_DIR)) {
    const filePath = path.join(LOG_DIR, entry)

    let stats: fs.Stats
    try {
      stats = fs.statSync(filePath)
    } catch {
      continue
    }

    if (!stats.isFile()) {
      continue
    }

    if (now - stats.mtimeMs > LOG_RETENTION_MS) {
      try {
        fs.rmSync(filePath)
      } catch {
        continue
      }
    }
  }
}

const formatArgs = (args: Array<unknown>) =>
  args
    .map((arg) =>
      typeof arg === "string" ? arg : (
        util.inspect(arg, { depth: null, colors: false })
      ),
    )
    .join(" ")

const sanitizeName = (name: string) => {
  const normalized = name
    .toLowerCase()
    .replaceAll(/[^a-z0-9]+/g, "-")
    .replaceAll(/^-+|-+$/g, "")

  return normalized === "" ? "handler" : normalized
}

const getLogStream = (filePath: string): fs.WriteStream => {
  let stream = logStreams.get(filePath)
  if (!stream || stream.destroyed) {
    stream = fs.createWriteStream(filePath, { flags: "a" })
    logStreams.set(filePath, stream)

    stream.on("error", (error: unknown) => {
      console.warn("Log stream error", error)
      logStreams.delete(filePath)
    })
  }
  return stream
}

const flushBuffer = (filePath: string) => {
  const buffer = logBuffers.get(filePath)
  if (!buffer || buffer.length === 0) {
    return
  }

  const stream = getLogStream(filePath)
  const content = buffer.join("\n") + "\n"
  stream.write(content, (error) => {
    if (error) {
      console.warn("Failed to write handler log", error)
    }
  })

  logBuffers.set(filePath, [])
}

const flushAllBuffers = () => {
  for (const filePath of logBuffers.keys()) {
    flushBuffer(filePath)
  }
}

const appendLine = (filePath: string, line: string) => {
  let buffer = logBuffers.get(filePath)
  if (!buffer) {
    buffer = []
    logBuffers.set(filePath, buffer)
  }

  buffer.push(line)

  if (buffer.length >= MAX_BUFFER_SIZE) {
    flushBuffer(filePath)
  }
}

setInterval(flushAllBuffers, FLUSH_INTERVAL_MS)

const cleanup = () => {
  flushAllBuffers()
  for (const stream of logStreams.values()) {
    stream.end()
  }
  logStreams.clear()
  logBuffers.clear()
}

process.on("exit", cleanup)
process.on("SIGINT", () => {
  cleanup()
  process.exit(0)
})
process.on("SIGTERM", () => {
  cleanup()
  process.exit(0)
})

let lastCleanup = 0

export const createHandlerLogger = (name: string): ConsolaInstance => {
  ensureLogDirectory()

  const sanitizedName = sanitizeName(name)
  const instance = consola.withTag(name)

  if (state.verbose) {
    instance.level = 5
  }
  instance.setReporters([])

  instance.addReporter({
    log(logObj) {
      ensureLogDirectory()

      if (Date.now() - lastCleanup > CLEANUP_INTERVAL_MS) {
        cleanupOldLogs()
        lastCleanup = Date.now()
      }
**Copilot AI** commented on lines +163 to +166 (Dec 5, 2025):

The cleanup interval tracking uses a module-level lastCleanup variable that's shared across all logger instances. This could lead to race conditions where multiple loggers try to clean up simultaneously, or one logger prevents others from cleaning up on time. Consider moving this to per-instance tracking or using a mutex/lock.

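One way to implement the per-instance variant the comment suggests is to move the timestamp into the factory closure; a rough sketch (illustrative only, not part of the PR):

```ts
// Sketch: a closure-scoped gate so each logger instance tracks its own
// last cleanup time instead of sharing the module-level variable.
const createCleanupGate = (intervalMs: number) => {
  let last = 0
  return (): boolean => {
    const now = Date.now()
    if (now - last > intervalMs) {
      last = now
      return true
    }
    return false
  }
}

// Inside createHandlerLogger:
//   const shouldCleanup = createCleanupGate(CLEANUP_INTERVAL_MS)
//   ...and in the reporter: if (shouldCleanup()) cleanupOldLogs()
```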

      const date = logObj.date
      const dateKey = date.toLocaleDateString("sv-SE")
      const timestamp = date.toLocaleString("sv-SE", { hour12: false })
      const filePath = path.join(LOG_DIR, `${sanitizedName}-${dateKey}.log`)
      const message = formatArgs(logObj.args as Array<unknown>)
      const line = `[${timestamp}] [${logObj.type}] [${logObj.tag || name}]${
        message ? ` ${message}` : ""
      }`

      appendLine(filePath, line)
    },
  })

  return instance
}
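A short usage sketch (the handler name and log calls are illustrative):

```ts
import { createHandlerLogger } from "./lib/logger"

// One tagged logger per handler; lines are buffered and flushed to
// ~/.local/share/copilot-api/logs/<sanitized-name>-YYYY-MM-DD.log.
const logger = createHandlerLogger("responses-handler")

logger.info("translating request", { model: "gpt-5-mini", stream: true })
// Debug-level lines are only emitted when state.verbose raises the level to 5.
logger.debug("raw payload", { input: "..." })
```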
3 changes: 3 additions & 0 deletions src/lib/paths.ts
@@ -5,15 +5,18 @@ import path from "node:path"
const APP_DIR = path.join(os.homedir(), ".local", "share", "copilot-api")

const GITHUB_TOKEN_PATH = path.join(APP_DIR, "github_token")
const CONFIG_PATH = path.join(APP_DIR, "config.json")

export const PATHS = {
  APP_DIR,
  GITHUB_TOKEN_PATH,
  CONFIG_PATH,
}

export async function ensurePaths(): Promise<void> {
  await fs.mkdir(PATHS.APP_DIR, { recursive: true })
  await ensureFile(PATHS.GITHUB_TOKEN_PATH)
  await ensureFile(PATHS.CONFIG_PATH)
}
}

async function ensureFile(filePath: string): Promise<void> {