Conversation
Add support for using Podman as an alternative container runtime to Docker.

Changes:
- Add RUNTIME build arg to Dockerfile.base for conditional Docker installation
- Skip Docker daemon setup in entrypoint when DOCKER_HOST is set
- Add runtime config option to AgentConfig type
- Skip privileged mode and Docker-in-Docker volume for Podman workspaces
- Add comprehensive Podman documentation

When runtime is set to "podman", workspaces connect to an external container engine via DOCKER_HOST instead of running Docker-in-Docker.

Co-Authored-By: Claude (anthropic.claude-sonnet-4-5-20250929-v1:0) <noreply@anthropic.com>
```ts
}

const containerId = await docker.createContainer({
  name: containerName,
```
Good catch. Fixed in 37c880e:
- `getContainerAddress()` now checks published ports first for the podman runtime, since container IPs are unreachable from the host in rootless podman-in-podman setups
- The worker port (7392) is now published on workspace containers for the podman runtime, so the fallback has a port to find

The container IP path is kept as a fallback for podman setups where IPs might be reachable (e.g. podman with bridge networking on the same host).
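A minimal sketch of that ordering, with names and shapes assumed rather than taken from Perry's actual implementation:

```typescript
// Hypothetical sketch: prefer published host ports for podman, fall back to
// the container IP. Container IPs are unreachable from the host in rootless
// podman-in-podman, so a published port is tried first.
type PortBinding = { containerPort: number; hostPort: number };

function getContainerAddress(
  runtime: 'docker' | 'podman',
  publishedPorts: PortBinding[],
  containerIp: string,
  targetPort: number
): { host: string; port: number } {
  if (runtime === 'podman') {
    const binding = publishedPorts.find((p) => p.containerPort === targetPort);
    if (binding) return { host: '127.0.0.1', port: binding.hostPort };
  }
  // Fallback: direct container IP (may work for podman with bridge networking
  // on the same host).
  return { host: containerIp, port: targetPort };
}
```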
Resolved in . The worker client no longer uses container IPs or published ports for podman: it routes all communication through `docker exec` inside the container, bypassing the networking layer entirely. This is more reliable than port publishing in rootless podman-in-podman, where iptables is unavailable.
src/workspace/manager.ts (outdated):

```ts
  { hostPort: sshPort, containerPort: 22, protocol: 'tcp' },
  ...(isPodman ? [{ hostPort: 7392, containerPort: 7392, protocol: 'tcp' as const }] : []),
],
```
Yep, good catch — fixed in 78a3a8f. The worker port now uses dynamic allocation via `findAvailablePort()` with a dedicated range (7392-7500), the same pattern as SSH ports. Multiple podman workspaces can run simultaneously.
Good catch — this was fixed in the same force-push that rewrote the branch. Dynamic port allocation via `findAvailablePort()` (range 7392-7500) is now used for podman workspaces instead of the hardcoded 7392. Multiple simultaneous workspaces each get a unique host port.
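A sketch of dynamic port allocation over a dedicated range, mirroring the `findAvailablePort()` pattern described above (the implementation details here are assumptions, not Perry's actual code):

```typescript
import net from 'node:net';

// Try to bind a TCP server to the port; if it binds, the port is free.
function isPortFree(port: number): Promise<boolean> {
  return new Promise((resolve) => {
    const srv = net.createServer();
    srv.once('error', () => resolve(false));
    srv.once('listening', () => srv.close(() => resolve(true)));
    srv.listen(port, '127.0.0.1');
  });
}

// Scan a dedicated range and return the first free port.
async function findAvailablePort(start: number, end: number): Promise<number> {
  for (let port = start; port <= end; port++) {
    if (await isPortFree(port)) return port;
  }
  throw new Error(`no free port in range ${start}-${end}`);
}

// Worker ports for podman workspaces would use the dedicated range:
//   const workerPort = await findAvailablePort(7392, 7500);
```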
When runtime is 'podman', the worker client now communicates with the worker server inside containers via 'docker exec curl' instead of direct HTTP to container IPs. This is necessary because rootless podman-in-podman containers have IPs in nested network namespaces that are unreachable from the host.

Changes:
- Add execFetch() helper that uses 'docker exec curl' as HTTP transport
- Update createWorkerClient() to accept optional runtime parameter
- Add runtime-aware health checks in startWorkerServer()
- Thread runtime parameter through session agent functions
- Update router to pass runtime from config to worker client calls

The Docker runtime path is completely unchanged - all changes are gated behind runtime === 'podman' checks.

Co-Authored-By: Claude (anthropic.claude-sonnet-4-5-20250929-v1:0) <noreply@anthropic.com>
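The exec-as-HTTP-transport idea can be sketched roughly as follows; the `ExecFn` shape and helper signature are assumptions, not the real `execInContainer` API:

```typescript
// Hedged sketch of docker-exec-as-HTTP-transport: curl runs *inside* the
// container, so localhost resolves in the container's network namespace and
// no host-reachable IP or published port is needed.
interface ExecResult { stdout: string }
type ExecFn = (container: string, cmd: string[]) => Promise<ExecResult>;

async function execFetch(
  exec: ExecFn,
  container: string,
  path: string,
  port: number
): Promise<{ ok: boolean; status: number; body: string }> {
  // -w appends the HTTP status code on a final line after the body.
  const result = await exec(container, [
    'curl', '-s', '-w', '\\n%{http_code}', `http://localhost:${port}${path}`,
  ]);
  const lines = result.stdout.trim().split('\n');
  const status = parseInt(lines.pop() || '0', 10);
  return { ok: status >= 200 && status < 300, status, body: lines.join('\n') };
}
```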
fix: ensure runtime config defaults to 'docker' and respect DOCKER_HOST in entrypoint

- Add runtime field to config loader with 'docker' default
- Skip dockerd monitoring in entrypoint when DOCKER_HOST is set

These changes complete the Podman runtime support by ensuring the config is properly loaded and the entrypoint doesn't try to manage dockerd when using an external container engine.

Co-Authored-By: Claude (anthropic.claude-sonnet-4-5-20250929-v1:0) <noreply@anthropic.com>
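The entrypoint gate reduces to a simple check; this minimal sketch uses an assumed helper name:

```typescript
// Sketch: the entrypoint only manages dockerd when no external container
// engine is configured via DOCKER_HOST (function name is hypothetical).
function shouldManageDockerd(env: Record<string, string | undefined>): boolean {
  // DOCKER_HOST set => an external engine (e.g. a podman sidecar) is in use,
  // so skip starting and monitoring a local dockerd.
  return !env.DOCKER_HOST;
}
```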
The compiled perry-worker binary bakes in the host's glibc dynamic linker path (e.g. /nix/store/.../ld-linux-x86-64.so.2) which doesn't exist in the Ubuntu workspace container. For podman runtime, copy the JS dist directory and create a bun wrapper at /usr/local/bin/perry instead. Bun is already installed in the workspace image and can run the JS dist directly. The Docker runtime path is unchanged.
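One way to picture the wrapper approach: generate a small shell shim that delegates to bun. The generator function and dist path below are assumptions for illustration:

```typescript
// Hypothetical sketch: build the contents of the bun wrapper installed at
// /usr/local/bin/perry for the podman runtime. The dist entry path is assumed.
function bunWrapperScript(distEntry: string): string {
  return [
    '#!/bin/sh',
    // bun runs the JS dist directly, avoiding the compiled binary's baked-in
    // glibc dynamic linker path that doesn't exist in the Ubuntu image.
    `exec bun ${distEntry} "$@"`,
    '',
  ].join('\n');
}

// e.g. bunWrapperScript('/opt/perry/dist/index.js')
```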
```ts
  const lines = result.stdout.trim().split('\n');
  const statusCode = parseInt(lines.pop() || '0', 10);
  const body = lines.join('\n');

  return {
    ok: statusCode >= 200 && statusCode < 300,
    status: statusCode,
    json: async () => JSON.parse(body),
    text: async () => body,
  };
}
```
Valid bug, fixed in . `execFetch()` now passes `--max-time` to curl when a timeout is specified, converting from milliseconds to seconds. Health checks will now properly time out instead of hanging indefinitely on unresponsive workers.
The execFetch function accepted a timeout option but never passed it to curl. In podman environments, unresponsive workers could cause health checks to hang indefinitely. Now passes --max-time to curl when timeout is specified, matching the Docker path's AbortController behavior.
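The millisecond-to-second conversion might look like this; the helper name and exact rounding are assumptions:

```typescript
// Sketch: translate a millisecond timeout option into curl arguments.
// curl's --max-time accepts fractional seconds, so no rounding is needed.
function curlTimeoutArgs(timeoutMs?: number): string[] {
  if (timeoutMs === undefined) return [];
  return ['--max-time', String(timeoutMs / 1000)];
}
```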
```ts
if (isPodman) {
  try {
    const result = await docker.execInContainer(
      containerName,
      ['curl', '-s', '-w', '\\n%{http_code}', `http://localhost:${WORKER_PORT}/health`],
      { user: 'workspace' }
    );
```
Bug: The Podman health check in `startWorkerServer` uses `curl` without a timeout, which can cause workspace startup to hang indefinitely if the worker is unresponsive.
Severity: HIGH
Suggested Fix
Add the `--max-time` argument to the `curl` command within the `checkHealth` function for the Podman path in `src/workspace/manager.ts`. A value of 1 second (`--max-time 1`) would make it consistent with the timeout used in the Docker health check path.
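The suggested fix would amount to a command array like the following (the `WORKER_PORT` value is assumed for illustration):

```typescript
// Hedged illustration of the suggested fix: cap the podman health-check curl
// at 1 second, matching the Docker path's timeout.
const WORKER_PORT = 7392; // assumed value
const healthCheckCmd = [
  'curl', '-s', '--max-time', '1',
  '-w', '\\n%{http_code}',
  `http://localhost:${WORKER_PORT}/health`,
];
```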
Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI
agent.
Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not
valid.
Location: src/workspace/manager.ts#L596-L602
Potential issue: In `src/workspace/manager.ts`, the `checkHealth` function for the
Podman runtime executes a `curl` command to check the worker's health. Unlike the Docker
implementation which uses a 1-second timeout, this `curl` command is executed without a
`--max-time` argument. If the worker server is slow to start or unresponsive, the `curl`
command will hang indefinitely. Since the calling function `docker.execInContainer` also
lacks a timeout mechanism, the entire workspace startup process will be blocked,
preventing the workspace from becoming available.
Rootless Podman Support

Adds support for running Perry with rootless Podman as an alternative to Docker. All changes are gated behind a `runtime: "podman"` config option — Docker behavior is completely unchanged.

Problem

Perry assumes Docker-in-Docker (DinD) inside workspace containers and communicates with the worker server via container IP networking. In rootless Podman, neither works: container IPs sit in nested network namespaces unreachable from the host, and privileged DinD is unavailable (no `--privileged` flag).

Changes

Runtime config abstraction (`src/config/loader.ts`, `src/shared/types.ts`)
- `runtime: "docker" | "podman"` field in `AgentConfig`
- Defaults to `"docker"` for backwards compatibility

Workspace creation (`src/workspace/manager.ts`)
- Skips `--privileged` flag, `--hostname`, and DinD volume for podman
- Passes `DOCKER_HOST` env var to workspace containers

Worker client communication (`src/worker/client.ts`)
- `execFetch()` helper that routes HTTP through `docker exec curl` for podman
- `createWorkerClient()` accepts optional `{ runtime }` parameter

Worker binary sync (`src/workspace/manager.ts`)
- Copies the JS dist and a bun wrapper instead of the compiled binary for podman

Session discovery (`src/agent/router.ts`, `src/sessions/agents/`)
- Threads runtime through `discoverAllSessions`, `getAgentSessionDetails`, `getSessionMessages`

Workspace image (`perry/internal/src/lib/services.ts`, `perry/internal/src/commands/entrypoint.ts`)
- Entrypoint skips Docker daemon setup when `DOCKER_HOST` is set
- `monitorServices()` skips the dockerd health check with an external engine
- `Dockerfile.base` supports `--build-arg RUNTIME=podman` to skip docker-ce installation

Testing

Tested end-to-end with rootless Podman-in-Podman (sidecar container via `DOCKER_HOST=tcp://podman-in-podman:2375`).

Commits

- `ca65df5` — feat: add Podman runtime support (core abstraction)
- `bc82636` — refactor: add Podman support to worker client communication
- `6020522` — fix: ensure runtime config defaults to 'docker' and respect DOCKER_HOST in entrypoint
- `ceca2c1` — fix: copy JS dist + bun wrapper instead of compiled binary for podman

Closes #159
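The config option described above can be sketched as a type plus an example; the surrounding `AgentConfig` fields are omitted since they are not shown in this PR:

```typescript
// Sketch of the new config field (other AgentConfig fields omitted/assumed).
type AgentConfig = {
  // Defaults to 'docker' for backwards compatibility when unset.
  runtime?: 'docker' | 'podman';
};

// Opting into the Podman runtime:
const config: AgentConfig = { runtime: 'podman' };
```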