Remove the need for worker_name to simplify scaling #8084
Comments
Coming from the Kubernetes angle, there's a common method used there to solve this exact issue: the internal cluster DNS generates multi-value records that let you look up all the active pods behind a service through A and PTR lookups. Though considering Redis can be used for replication now, perhaps that could be a better solution, albeit one that would make Redis a much harder dependency for Synapse with workers.
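To illustrate the DNS approach: with a Kubernetes headless Service, a single A lookup on the service name returns one address per ready pod, so a coordinator can enumerate all running workers without any of them carrying a pre-assigned name. A minimal sketch (the service hostname is hypothetical):

```python
import socket

def resolve_worker_ips(service_host: str) -> list[str]:
    """Return all IPv4 A-record addresses for a hostname.

    Against a Kubernetes headless Service (e.g. the hypothetical
    "synapse-workers.matrix.svc.cluster.local"), this yields one IP
    per ready pod, which is enough to discover every running worker.
    """
    infos = socket.getaddrinfo(
        service_host, None, family=socket.AF_INET, type=socket.SOCK_STREAM
    )
    # Deduplicate while preserving order; sockaddr is (ip, port) for AF_INET.
    return list(dict.fromkeys(info[4][0] for info in infos))
```

Outside a cluster this works the same way against any multi-homed name; inside a cluster the pod set changes as replicas scale, so callers should re-resolve rather than cache.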
A possible solution for Kubernetes would be to use StatefulSets + environment variables. Then we need to change this line in the source code.
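The StatefulSet idea boils down to deriving the worker name from the pod's stable identity at startup instead of baking it into a per-instance file. A sketch of such an entrypoint, assuming a `POD_NAME` environment variable injected via the Kubernetes Downward API (the variable name and template are illustrative, not Synapse's actual mechanism):

```python
import os
import socket

def render_worker_config() -> str:
    """Build a per-pod worker config from the pod's stable identity.

    In a StatefulSet, hostnames are stable ordinals ("synapse-worker-0",
    "synapse-worker-1", ...). POD_NAME is assumed to be injected via the
    Downward API; the pod hostname is used as a fallback.
    """
    worker_name = os.environ.get("POD_NAME", socket.gethostname())
    return (
        "worker_app: synapse.app.generic_worker\n"
        f"worker_name: {worker_name}\n"
    )
```

A wrapper like this would write the rendered config to disk and then exec the worker, so every replica can share one identical pod template.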
P.S. As a temporary solution in the current situation.
The issue with using a StatefulSet to get a stable name is that they're designed for running applications where the runtime state is extremely important - things like databases. I already see far too many stateless things that start up as STS simply to get stable names.
This also will not help with other orchestrators like Compose, Swarm, etc.
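To see why, note that Compose's scaling feature stamps out replicas from a single service definition, so there is simply nowhere to express a distinct name per instance. A hypothetical fragment (the `SYNAPSE_WORKER` variable is illustrative, not a documented image option):

```yaml
# docker-compose sketch: every replica shares this one definition,
# so a unique worker_name per instance cannot be expressed here.
services:
  synapse-worker:
    image: matrixdotorg/synapse
    deploy:
      replicas: 4
    environment:
      SYNAPSE_WORKER: synapse.app.generic_worker
      # no per-replica field exists to give each of the 4 its own worker_name
```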
Could someone make the documentation more detailed to clarify this point?
I still feel - even more strongly now - that using Redis for this is the right way to go, since it offers a good channel for querying all running workers.
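The Redis idea amounts to a heartbeat registry: each worker periodically refreshes a keyed entry with a TTL, and listing live workers skips anything stale (Redis would expire entries automatically via `SETEX`/`EXPIRE`). A minimal in-memory stand-in for the pattern, purely for illustration - this is not Synapse's replication protocol:

```python
import time

class WorkerRegistry:
    """In-memory sketch of a TTL-based worker registry.

    A Redis-backed version would replace the dict with SETEX'd keys;
    here expiry is checked manually against a monotonic clock.
    """

    def __init__(self, ttl_seconds: float = 30.0):
        self.ttl = ttl_seconds
        self._entries: dict[str, float] = {}  # worker name -> last heartbeat

    def heartbeat(self, worker_name: str) -> None:
        """Record (or refresh) a worker's liveness."""
        self._entries[worker_name] = time.monotonic()

    def live_workers(self) -> list[str]:
        """Return workers whose last heartbeat is within the TTL."""
        now = time.monotonic()
        return [name for name, t in self._entries.items() if now - t < self.ttl]
```

With something like this, workers would not need pre-assigned names at all: each could register whatever identity it generated at boot, and peers could discover the full set on demand.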
Description:
The recent-ish changes to workers now recommend setting a unique worker_name per worker process. This makes scaling the number of workers quite a bit more complex than before, since every instance now needs a tailor-made config file. This is especially annoying when using things like docker-compose, Swarm, or Kubernetes, where spinning up multiple (identical) instances of a service is a built-in feature.
I have no concrete proposal for how to solve the problem(s) the worker name solves, though. (AFAICS they're only really necessary for reverse mapping for federation senders and stream writers?)