feat(cli): add run-multi command for parallel instances #7
armamini wants to merge 2 commits into ssmirr:master from
Conversation
Add a new 'run-multi' command that runs multiple Conduit inproxy instances in parallel for high-capacity VPS/server deployments. Features:
- Run 1-32 parallel instances with the --instances flag
- Each instance gets a separate data directory and key
- Aggregated stats logging every 10 seconds
- Per-instance stats files with a --stats-file pattern
- Graceful shutdown of all instances on Ctrl+C
I like the idea @armamini ! I might suggest one change to this if you can work on it, but I have two questions first:
Thanks for the feedback @ssmirr ! 🙏 Honest answer: I haven't tested with a real Psiphon network config, so I can't confirm:
I only tested the structural parts:
To properly test, I'd need a valid
I am testing it now, thank you for the quick response!
@ssmirr
Thank you @armamini ! So far seeing great results. I will keep my test running for an hour or so and update here.
@armamini I'm convinced the idea works. Below are my suggestions; if you can make progress, I will be able to review in a few hours later today! Note: I know this probably means a lot of changes, and I know what I'm asking for here is very opinionated. But trust me... it's based on actual data from testing (by myself and many different people using the tool), as well as the feedback I've heard and seen from the original team on the original PR. Instead of adding a separate multi-instance command, update the start command to:
Please see how much progress you can make if you have time. I will try to get to this later today (hopefully 🤞🏼). Thanks again for your contribution!
Integrate multi-instance into start command instead of separate run-multi. Auto-splits based on max-clients (1 instance per 100 clients). Names directories by key short hash for easy identification. Prefixes all logs with key hash for merged output.
Hey @ssmirr Thanks for your feedback! Based on your suggestion, I've refactored the multi-instance feature:
So how does it work now?

```shell
# Single instance (default behavior - unchanged)
conduit start -c config.json -m 50

# Multi-instance mode (new)
conduit start -c config.json -m 300 --multi-instance
```

When --multi-instance is enabled:
Fancy test output:

```shell
$ ./dist/conduit start -c config.json -m 350 --multi-instance -v
Starting 4 Psiphon Conduit instances (Max Clients/instance: 87, Bandwidth: 40 Mbps)
[acad1941] Starting with data dir: ./data/acad1941
[53d2cd8a] Starting with data dir: ./data/53d2cd8a
[0307eae3] Starting with data dir: ./data/0307eae3
[04004df3] Starting with data dir: ./data/04004df3
```

Ba Ehteram! ("Respectfully!")
Here's my testing:
multiplexer.go

```go
package conduit

import (
	"context"
	"fmt"
	"sync"
	"time"

	"github.com/Psiphon-Inc/conduit/cli/internal/config"
	"github.com/Psiphon-Labs/psiphon-tunnel-core/psiphon"
)

// NoticeMultiplexer dispatches psiphon notices to multiple handlers.
type NoticeMultiplexer struct {
	handlers []func([]byte)
	mu       sync.RWMutex
}

func (m *NoticeMultiplexer) AddHandler(h func([]byte)) {
	m.mu.Lock()
	m.handlers = append(m.handlers, h)
	m.mu.Unlock()
}

func (m *NoticeMultiplexer) Handle(notice []byte) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	for _, h := range m.handlers {
		h(notice)
	}
}

func NewMultiService(configs []*config.Config) (*MultiService, error) {
	mux := &NoticeMultiplexer{}
	instances := make([]*Instance, len(configs))
	for i, cfg := range configs {
		service, err := New(cfg)
		if err != nil {
			return nil, fmt.Errorf("creating instance %d: %w", i, err)
		}
		service.SetSharedMultiplexer()
		mux.AddHandler(service.GetNoticeHandler())
		instances[i] = &Instance{ID: i, Config: cfg, Service: service}
	}
	// Key fix: set global notice writer ONCE with multiplexer
	psiphon.SetNoticeWriter(psiphon.NewNoticeReceiver(mux.Handle))
	return &MultiService{instances: instances, multiplexer: mux}, nil
}
```

service.go

```go
// Service struct - add field:
type Service struct {
	config       *config.Config
	controller   *psiphon.Controller
	stats        *Stats
	mu           sync.RWMutex
	useSharedMux bool // If true, don't set global notice writer (multi-instance mode)
}

// Add new type and methods:
type NoticeHandler func([]byte)

func (s *Service) SetSharedMultiplexer() {
	s.useSharedMux = true
}

func (s *Service) GetNoticeHandler() NoticeHandler {
	return s.handleNotice
}

// Modify Run() - wrap SetNoticeWriter in conditional:
func (s *Service) Run(ctx context.Context) error {
	// Skip if using shared multiplexer (multi-instance mode)
	if !s.useSharedMux {
		psiphon.SetNoticeWriter(psiphon.NewNoticeReceiver(
			func(notice []byte) {
				s.handleNotice(notice)
			},
		))
	}
	// ... rest of Run() unchanged
}
```

This is hacky and I'm not big on Go, so feel free to ignore.
The
The fundamental flaw is that Psiphon notices don't include instance identification. When a notice like
So... stats are triple-counted (or N-counted for N instances).
BUT Psiphon also has a
Each instance:
Let's see if this works 👀
Sorry, I don't think this approach would work at all without further changes to the upstream psiphon proxy code. I have an idea for a different approach to support multi-instance and will add it separately, outside of this PR.
Hey @ssmirr
To support this move, I added a simple way to run multiple Conduit instances in parallel on a VPS. Each instance gets its own key and reputation, so you can help more users with the same server.
Why?
Usage