## Environment Variables
There are options that can be tweaked or parameters that can be set using environment variables:

| Environment variable | Description |
|---|---|
|**LOCALAI_P2P_DISABLE_DHT**| Set to "true" to disable DHT and make the p2p layer local-only (mDNS discovery) |
|**LOCALAI_P2P_DISABLE_LIMITS**| Set to "true" to disable connection limits and resource management |
|**LOCALAI_P2P_TOKEN**| Set the token for the p2p network |
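For example, a worker environment could be configured as follows (a sketch; the token value is a placeholder for whatever your network uses):

```shell
# Shared secret for the private p2p network (placeholder value).
export LOCALAI_P2P_TOKEN="<shared network token>"
# Keep discovery local-only (mDNS) by disabling the DHT.
export LOCALAI_P2P_DISABLE_DHT=true
# Optionally disable connection limits and resource management.
# export LOCALAI_P2P_DISABLE_LIMITS=true
```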
## Architecture
LocalAI uses https://github.com/libp2p/go-libp2p under the hood, the same project powering IPFS. Unlike other frameworks, LocalAI's peer-to-peer layer has no single master server: it uses gossip (pub/sub) and ledger functionalities to achieve consensus across different peers.
[EdgeVPN](https://github.com/mudler/edgevpn) is used as a library to establish the network and expose the ledger functionality under a shared token, simplifying automatic discovery and allowing separate, private peer-to-peer networks.
In worker mode, the model weights are split across workers in proportion to their available memory; in federation mode, requests are balanced across nodes, each of which must load the model fully.
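The proportional split in worker mode can be sketched as follows. This is an illustrative helper (`splitLayers` is hypothetical, not LocalAI's actual implementation, which delegates splitting to the backend): it assigns model layers to workers in proportion to their free memory.

```go
package main

import "fmt"

// splitLayers distributes nLayers across workers proportionally to their
// free memory. Rounding leftovers go to the worker with the most memory,
// so the assigned layers always sum to nLayers.
func splitLayers(nLayers int, memGB []float64) []int {
	total := 0.0
	for _, m := range memGB {
		total += m
	}
	out := make([]int, len(memGB))
	assigned := 0
	largest := 0
	for i, m := range memGB {
		out[i] = int(float64(nLayers) * m / total) // floor of the proportional share
		assigned += out[i]
		if m > memGB[largest] {
			largest = i
		}
	}
	out[largest] += nLayers - assigned // hand the remainder to the biggest worker
	return out
}

func main() {
	// A 32-layer model over workers with 16, 8 and 8 GB free.
	fmt.Println(splitLayers(32, []float64{16, 8, 8})) // [16 8 8]
}
```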
## Notes
- If running in p2p mode with container images, make sure you start the container with `--net host` or `network_mode: host` in the docker-compose file.
- Only a single model is supported currently.
- Ensure the server detects new workers before starting inference. Currently, additional workers cannot be added once inference has begun.
- For more details on the implementation, refer to [LocalAI pull request #2343](https://github.com/mudler/LocalAI/pull/2343).
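The first note above can be sketched as a compose service (a sketch; the image tag and token variable are assumptions, adapt them to your setup):

```yaml
# Hypothetical docker-compose service: host networking lets the p2p
# layer reach peers directly, which container bridge networks block.
services:
  localai:
    image: localai/localai:latest
    network_mode: host
    environment:
      - LOCALAI_P2P_TOKEN=${LOCALAI_P2P_TOKEN}
```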