
Feature Request: Allow Master-Free Mode (Headless Dispatch Only) #48

@IT-BillDeng


🚀 Feature Request: Allow Master-Free Mode (Headless Dispatch Only)

Environment Overview

  • Local machine: NVIDIA RTX 5080 (single GPU), used mainly for workflow design and testing.

  • Remote server: 4 × RTX 2080 Ti 22 GB cards, running as distributed Workers for heavy generation tasks.

  • Deployment: Docker-based ComfyUI-Distributed setup.

    • Local PC runs as current “Master”.
    • Remote server runs 4–5 Workers (--listen 0.0.0.0 --enable-cors-header).

🧩 Current Behavior

At present, ComfyUI-Distributed requires the Master process to remain running and enabled (grey-checked in the UI) even when all computation is delegated to Workers.
If the Master is closed or unavailable, the distributed queue cannot aggregate Worker results or coordinate execution.
This behavior effectively means:

  • The Master must have the same Python + Torch environment as the Workers.
  • It must stay powered on during all remote jobs.
  • Results cannot be returned to a “headless” or non-compute node.

💡 Requested Feature

Provide an option to disable Master participation entirely, such that:

  1. Master can run in “dispatch-only / coordinator mode”, or even as a stateless lightweight service.

  2. After job dispatch, Workers could:

    • Write outputs directly to shared storage (e.g., NFS/S3).
    • Or send completion callbacks to an API endpoint instead of requiring a live Master aggregator (see the callback sketch after this list).
  3. Allow running without GPU / PyTorch on the Master, useful for cloud-hosted or laptop-based coordination.
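
For concreteness, here is a minimal sketch of what such a Worker-side completion callback could look like. The endpoint path (`/results`), the payload fields, and the helper name are hypothetical, not part of the current ComfyUI-Distributed API; the point is only that a Worker can report "done, outputs are here" without a live Master holding results in memory.

```python
import json
import urllib.request

def notify_job_complete(collector_url: str, job_id: str, output_paths: list[str]) -> None:
    """Hypothetical completion callback: tell a coordinator/collector that a job finished.

    The URL, the /results path, and the payload shape are illustrative only;
    they are not part of the existing ComfyUI-Distributed API.
    """
    payload = {
        "job_id": job_id,
        "status": "completed",
        # Paths point into shared storage (e.g. an NFS mount or S3 keys),
        # so no image bytes have to flow back through a Master process.
        "outputs": output_paths,
    }
    req = urllib.request.Request(
        f"{collector_url.rstrip('/')}/results",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        resp.read()  # drain the body; a 2xx status means the callback was accepted

# Example usage on a Worker, after it has written its outputs to shared storage:
# notify_job_complete("http://collector:8189", "job-42", ["/mnt/nfs/out/job-42_0001.png"])
```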


🧭 Why This Matters

This feature would unlock much more flexible usage patterns:

| Scenario | Benefit |
| --- | --- |
| Local PC (5080) used for workflow debugging, then delegating the final high-res render to the remote cluster | The local PC can be shut down once the job is dispatched |
| Completely non-compute orchestration node (e.g., a cloud VPS) | Enables centralized scheduling across multiple GPU servers |
| Multi-user lab setups | One lightweight web Master per user, many shared Workers underneath |

Today, the Master still has to stay online with Torch installed; otherwise result aggregation fails.


⚙️ Possible Implementations

  • Add a --no-master-runtime or --dispatch-only flag.
  • Allow Worker nodes to write outputs to shared storage and notify the coordinator via a REST or WebSocket API.
  • Introduce an optional lightweight “Result Collector” microservice to replace Master-side aggregation (sketched below).
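
As a rough illustration of the “Result Collector” idea, a stand-alone service only needs to accept completion callbacks and expose job status; it never needs Torch or a GPU. The sketch below uses Python's standard library; the port, the `/results` route, and the payload shape are assumptions matching the callback sketch above, not an existing component of this project.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory record of finished jobs. A real service would persist this
# (SQLite, Redis, ...) so a dispatch-only Master or a user can query it later.
completed_jobs: dict[str, dict] = {}

class ResultCollector(BaseHTTPRequestHandler):
    """Hypothetical stand-in for Master-side aggregation: it only records
    completion callbacks and answers status queries -- no GPU, no Torch."""

    def do_POST(self):
        # Workers POST {"job_id": ..., "status": ..., "outputs": [...]} here.
        if self.path != "/results":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        completed_jobs[payload["job_id"]] = payload
        self.send_response(200)
        self.end_headers()

    def do_GET(self):
        # Lets the dispatcher (or a human) poll which jobs have finished.
        if self.path != "/results":
            self.send_error(404)
            return
        body = json.dumps(completed_jobs).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Port 8189 is an arbitrary choice for this sketch.
    HTTPServer(("0.0.0.0", 8189), ResultCollector).serve_forever()
```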

🙏 Thanks

This project is already a huge improvement for multi-GPU / distributed workflows.
Adding this “headless Master” mode would make it ideal for hybrid local+remote setups like mine, enabling better power management and flexibility.

