# Scaling Synapse via workers

For small instances it is recommended to run Synapse in the default monolith
mode. For larger instances where performance is a concern it can be helpful to
split out functionality into multiple separate Python processes. These
processes are called 'workers', and are (eventually) intended to scale
horizontally independently.

Synapse's worker support is under active development and subject to change as
we attempt to rapidly scale ever larger Synapse instances. However we are
documenting it here to help admins needing a highly scalable Synapse instance
similar to the one running `matrix.org`.

The processes communicate with each other via a Synapse-specific protocol
called 'replication' (analogous to MySQL- or Postgres-style database
replication) which
feeds streams of newly written data between processes so they can be kept in
sync with the database state.

When configured to do so, Synapse uses a
[Redis pub/sub channel](https://redis.io/topics/pubsub) to send the replication
stream between all configured Synapse processes. Additionally, processes may
make HTTP requests to each other, primarily for operations which need to wait
for a reply, such as sending an event.

Redis support was added in v1.13.0 and became the recommended method in
v1.18.0. It replaces the old direct TCP connections (deprecated as of v1.18.0)
to the main process: rather than all the workers connecting to the main
process, all the workers and the main process connect to Redis, which relays
replication commands between processes. This can give a significant CPU saving
on the main process and will be a prerequisite for upcoming performance
improvements.
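
If you want to check what is actually flowing over the replication channel,
one option is to watch the commands passing through Redis. This is a
diagnostic sketch, assuming Redis is running locally on the default port;
`MONITOR` is verbose, so only use it for short-lived debugging:

```sh
# Stream every command the Redis server processes, including the PUBLISH
# commands that carry Synapse's replication traffic between processes.
redis-cli -h 127.0.0.1 -p 6379 monitor
```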

See the [Architectural diagram](#architectural-diagram) section at the end for
a visualisation of what this looks like.

## Setting up workers

A Redis server is required to manage the communication between the processes.
The Redis server should be installed following the normal procedure for your
distribution (e.g. `apt install redis-server` on Debian). It is safe to use an
existing Redis deployment if you have one.

Once installed, check that Redis is running and accessible from the host running
Synapse, for example by executing `echo PING | nc -q1 localhost 6379` and seeing
a response of `+PONG`.
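
The appropriate dependencies must also be installed for Synapse. If using a
virtualenv, these can be installed with:

```sh
pip install matrix-synapse[redis]
```

Note that these dependencies are included when Synapse is installed with `pip
install matrix-synapse[all]`. They are also included in the debian packages from
`matrix.org` and in the docker images at
https://hub.docker.com/r/matrixdotorg/synapse/.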

To make effective use of the workers, you will need to configure an HTTP
reverse-proxy such as nginx or haproxy, which will direct incoming requests to
the correct worker, or to the main Synapse instance. See
[reverse_proxy.md](reverse_proxy.md) for information on setting up a reverse
proxy.
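
As a rough illustration, the relevant fragment of an nginx site config might
look like the sketch below. The worker port (8083) and the endpoint pattern are
assumptions for illustration only; consult [reverse_proxy.md](reverse_proxy.md)
and the endpoint lists for each worker application for the real values:

```nginx
# Inside an existing `server { ... }` block:

# Send sync requests to a dedicated worker listening on port 8083...
location ~ ^/_matrix/client/(v2_alpha|r0)/sync$ {
    proxy_pass http://localhost:8083;
}

# ...and everything else to the main process on the usual port 8008.
location /_matrix {
    proxy_pass http://localhost:8008;
}
```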

To enable workers you should create a configuration file for each worker
process. Each worker configuration file inherits the configuration of the shared
homeserver configuration file. You can then override configuration specific to
that worker, e.g. the HTTP listener that it provides (if any); logging
configuration; etc. You should minimise the number of overrides though to
maintain a usable config.

### Shared Configuration

Next you need to add both an HTTP replication listener, used for HTTP requests
between processes, and a Redis config to the shared Synapse configuration file
(`homeserver.yaml`). For example:

```yaml
# extend the existing `listeners` section. This defines the ports that the
# main process will listen on.
listeners:
  # The HTTP replication port
  - port: 9093
    bind_address: '127.0.0.1'
    type: http
    resources:
      - names: [replication]

redis:
  enabled: true
```

See the sample config for the full documentation of each option.

Under **no circumstances** should the replication listener be exposed to the
public internet; it has no authentication and is unencrypted.

### Worker Configuration

In the config file for each worker, you must specify the type of worker
application (`worker_app`), and you should specify a unique name for the worker
(`worker_name`). The currently available worker applications are listed below.
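
As a concrete sketch, a configuration file for a worker running the generic
worker application could look something like this. The worker name, port, and
log config path are illustrative, and the listener resources depend on which
endpoints you route to this worker:

```yaml
worker_app: synapse.app.generic_worker
worker_name: generic_worker1

# Point back at the main process's HTTP replication listener,
# as defined in the shared configuration above.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093

# The HTTP listener that this worker exposes to the reverse proxy.
worker_listeners:
  - type: http
    port: 8083
    resources:
      - names: [client, federation]

worker_log_config: /etc/matrix-synapse/generic-worker-log.yaml
```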