
[macos] gvproxy listens to ports of exited containers #28047

@xverges

Issue Description

podman port --all lists fewer ports than sudo lsof -i -P | grep LISTEN | grep gvproxy.

The extra ports correspond to a container that has exited; the exit probably happened while the laptop was asleep.

% podman port --all
0288003e4b67	6379/tcp -> 0.0.0.0:6379
b07a3ef56677	9411/tcp -> 0.0.0.0:9411
d2cae48bf699	50005/tcp -> 0.0.0.0:50005
d2cae48bf699	8080/tcp -> 0.0.0.0:58080
d2cae48bf699	9090/tcp -> 0.0.0.0:59090

% sudo lsof -i -P | grep LISTEN | grep proxy
gvproxy   41394          xavier    9u  IPv4 0x93afc67c8a48b941      0t0    TCP localhost:50451 (LISTEN)
gvproxy   41394          xavier   19u  IPv6 0xd28f508e31ad9f81      0t0    TCP *:2379 (LISTEN)
gvproxy   41394          xavier   22u  IPv6  0x18d70602dd655c1      0t0    TCP *:50006 (LISTEN)
gvproxy   41394          xavier   24u  IPv6 0x10404d47b4d11cde      0t0    TCP *:58081 (LISTEN)
gvproxy   41394          xavier   25u  IPv6 0x7ad294355ff8b4b6      0t0    TCP *:59091 (LISTEN)
gvproxy   41394          xavier   27u  IPv6 0x61040aaa4f345002      0t0    TCP *:9411 (LISTEN)
gvproxy   41394          xavier   28u  IPv6 0x162dcca22bf5a199      0t0    TCP *:50005 (LISTEN)
gvproxy   41394          xavier   32u  IPv6 0xfe5862aac7137f8b      0t0    TCP *:58080 (LISTEN)
gvproxy   41394          xavier   33u  IPv6 0x633385058f59f4fe      0t0    TCP *:59090 (LISTEN)
gvproxy   41394          xavier   40u  IPv6 0xdb44eca04e1f2302      0t0    TCP *:6379 (LISTEN)

% podman ps -a
CONTAINER ID  IMAGE                                  COMMAND               CREATED         STATUS                        PORTS                                                                                               NAMES
...
c9535cc1236f  docker.io/daprio/dapr:1.16.8           --etcd-data-dir=/...  57 minutes ago  Exited (1) 15 minutes ago     0.0.0.0:2379->2379/tcp, 0.0.0.0:50006->50006/tcp, 0.0.0.0:58081->8080/tcp, 0.0.0.0:59091->9090/tcp  dapr_scheduler
....
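
A rough way to spot the stale forwards is to diff the two listings. This is only a sketch: the awk field positions assume the output formats shown above, and gvproxy's own API socket (localhost:50451 here) will also show up as "extra".

# ports gvproxy is listening on that podman does not report as published
% comm -13 \
    <(podman port --all | awk '{print $NF}' | awk -F: '{print $NF}' | sort -u) \
    <(sudo lsof -i -P | awk '/gvproxy/ && /LISTEN/ {n=split($9,a,":"); print a[n]}' | sort -u)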

Steps to reproduce the issue

These steps are not 100% reliable.

  1. podman --log-level=debug run --name dapr_scheduler --entrypoint ./scheduler --volume dapr_scheduler:/var/lock -p 50006:50006 -p 2379:2379 -p 58081:8080 -p 59091:9090 docker.io/daprio/dapr:1.16.8 --etcd-data-dir=/var/lock/dapr/scheduler
  2. Put the Mac to sleep (one scripted way is sketched after this list).
  3. Wake it up after a while.
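
Step 2 can also be scripted, assuming pmset is available (it ships with macOS):

# put the Mac to sleep immediately
% sudo pmset sleepnow
# wait a few minutes, wake the machine, then compare again
% podman ps -a
% sudo lsof -i -P | grep LISTEN | grep gvproxy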

Describe the results you received

The container will have exited. Its ports no longer show up in podman port --all, but they are still listed in sudo lsof -i -P | grep LISTEN | grep proxy. If you remove the container and re-run the command above, you get a "proxy already running" error.
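
As a workaround (not a fix), restarting the podman machine also restarts gvproxy, which should drop the stale listeners:

% podman machine stop
% podman machine start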

Describe the results you expected

When the container exits, gvproxy stops listening on its ports and they are free to reuse.

podman info output

podman info
Client:
  APIVersion: 5.7.1
  BuildOrigin: pkginstaller
  Built: 1765378539
  BuiltTime: Wed Dec 10 15:55:39 2025
  GitCommit: f845d14e941889ba4c071f35233d09b29d363c75
  GoVersion: go1.25.5
  Os: darwin
  OsArch: darwin/amd64
  Version: 5.7.1
host:
  arch: amd64
  buildahVersion: 1.42.2
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.13-2.fc43.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.13, commit: '
  cpuUtilization:
    idlePercent: 99.19
    systemPercent: 0.56
    userPercent: 0.25
  cpus: 8
  databaseBackend: sqlite
  distribution:
    distribution: fedora
    variant: coreos
    version: "43"
  emulatedArchitectures:
  - linux/arm64
  - linux/arm64be
  eventLogger: journald
  freeLocks: 2041
  hostname: localhost.localdomain
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 1000000
    uidmap:
    - container_id: 0
      host_id: 501
      size: 1
    - container_id: 1
      host_id: 100000
      size: 1000000
  kernel: 6.17.7-300.fc43.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 20126547968
  memTotal: 21471559680
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.17.0-1.fc43.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.17.0
    package: netavark-1.17.1-1.fc43.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.17.1
  ociRuntime:
    name: crun
    package: crun-1.24-1.fc43.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.24
      commit: 54693209039e5e04cbe3c8b1cd5fe2301219f0a1
      rundir: /run/user/501/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/sbin/pasta
    package: passt-0^20250919.g623dbf6-1.fc43.x86_64
    version: |
      pasta 0^20250919.g623dbf6-1.fc43.x86_64
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: unix:///run/user/501/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: true
  slirp4netns:
    executable: /usr/sbin/slirp4netns
    package: slirp4netns-1.3.1-3.fc43.x86_64
    version: |-
      slirp4netns version 1.3.1
      commit: e5e368c4f5db6ae75c2fce786e31eef9da6bf236
      libslirp: 4.9.1
      SLIRP_CONFIG_VERSION_MAX: 6
      libseccomp: 2.6.0
  swapFree: 0
  swapTotal: 0
  uptime: 3h 20m 34.00s (Approximately 0.12 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /var/home/core/.config/containers/storage.conf
  containerStore:
    number: 5
    paused: 0
    running: 3
    stopped: 2
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/core/.local/share/containers/storage
  graphRootAllocated: 99252940800
  graphRootUsed: 6993928192
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 7
  runRoot: /run/user/501/containers
  transientStore: false
  volumePath: /var/home/core/.local/share/containers/storage/volumes
version:
  APIVersion: 5.7.1
  BuildOrigin: 'Copr: packit/containers-podman-27732'
  Built: 1765238400
  BuiltTime: Tue Dec  9 01:00:00 2025
  GitCommit: f845d14e941889ba4c071f35233d09b29d363c75
  GoVersion: go1.25.4 X:nodwarf5
  Os: linux
  OsArch: linux/amd64
  Version: 5.7.1

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details


Additional information



Labels

kind/bug, macos, remote
