Merged
Commits
18 commits
9145479
OF-3176: Offload blocking Netty handlers to dedicated executor
guusdk Feb 25, 2026
ed1264b
OF-3176: Fix race condition introduced by separate executor for block…
guusdk Feb 26, 2026
d89208c
OF-3176: Ensure Netty socket acceptor is fully ready before start() r…
guusdk Feb 26, 2026
0e0b83a
OF-3176: Type NettyConnectionHandler with generic StanzaHandler param…
guusdk Feb 27, 2026
f9ddb03
OF-3176: Fix race condition on outbound S2S stanza handler initializa…
guusdk Feb 27, 2026
4cbed6e
OF-3176: Preserve backward compatibility for NettyChannelHandlerFacto…
guusdk Feb 27, 2026
39b34fa
OF-3176: Update javadoc for blockingHandlerExecutor per review feedback
guusdk Feb 27, 2026
32857b0
OF-3176: Fix NettyConnectionAcceptor start() to restore interrupt fla…
guusdk Feb 27, 2026
c0b2bac
OF-3182: Share Netty thread pools across outbound S2S connections
guusdk Feb 27, 2026
4bc00ab
OF-3176: Refactor Netty channel initializers to start TLS in channelA…
guusdk Mar 3, 2026
1b47c64
OF-3176: Guard NettyConnectionHandler.handlerAdded against exceptions…
guusdk Mar 3, 2026
2d9c8de
OF-3176: Ensure that Netty main channel startup does not block indefi…
guusdk Mar 3, 2026
f48f287
Migrate properties for Netty socket options to configurable system pr…
guusdk Mar 3, 2026
87093e2
OF-3176: start DirectTLS and enable autoRead before business logic ha…
guusdk Mar 3, 2026
d1c7ed3
OF-3176: ensure DirectTLS connections propagate channelActive only af…
guusdk Mar 3, 2026
949f6a7
OF-3176: fix: delay enabling autoRead until business logic handler is…
guusdk Mar 4, 2026
ff30fd9
Revert "OF-3180: Offload ConnectionCloseListener invocation to a dedi…
guusdk Mar 4, 2026
f780f1e
Revert "OF-3180: Refactor ConnectionCloseListener implementations for…
guusdk Mar 4, 2026
18 changes: 16 additions & 2 deletions documentation/internal-networking.html
@@ -81,6 +81,12 @@ <h2>Netty in Openfire</h2>
These Connection Listeners each create a new NettyConnectionAcceptor, which bootstraps a networking
connection listener using Netty.
</p>
<p>
In addition to Netty's standard acceptor (boss) and I/O worker event loop groups, Openfire introduces a
dedicated executor for handlers that may perform blocking or long-running operations. This executor is
shared across all channels created by a connection acceptor and is used to ensure that Netty EventLoop
threads remain strictly non-blocking and responsive.
</p>
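The division of labour described above (non-blocking I/O event loops, with slow work offloaded to a shared dedicated executor) can be illustrated with a JDK-only analogue. This is not Netty's API; all class and method names here are hypothetical stand-ins for the pattern.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BlockingOffloadSketch {
    // One thread plays the role of a Netty EventLoop; it must never block.
    static final ExecutorService EVENT_LOOP = Executors.newSingleThreadExecutor();

    // Dedicated pool for blocking or long-running work, shared across "channels".
    static final ExecutorService BLOCKING_EXECUTOR = Executors.newFixedThreadPool(4);

    // Simulates handling one inbound event: cheap framing work stays on the
    // event loop, while potentially blocking business logic is offloaded.
    static CompletableFuture<String> handle(String stanza) {
        return CompletableFuture
            .supplyAsync(stanza::trim, EVENT_LOOP)   // fast, non-blocking step
            .thenApplyAsync(s -> {                   // potentially blocking step
                try {
                    Thread.sleep(20);                // stands in for auth/routing/persistence
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                return "processed:" + s;
            }, BLOCKING_EXECUTOR);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(handle(" <message/> ").get()); // prints "processed:<message/>"
        EVENT_LOOP.shutdown();
        BLOCKING_EXECUTOR.shutdown();
    }
}
```

Because the slow step runs on `BLOCKING_EXECUTOR`, the single event-loop thread is free to pick up the next event immediately, which is the starvation-avoidance property the paragraph above describes.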

<figure>
<a href="images/netty-in-openfire.png"><img src="images/netty-in-openfire.png" alt="Diagram showing how Netty fits into Openfire"></a>
@@ -91,8 +97,9 @@ <h2>Netty in Openfire</h2>
Zooming in on the above diagram to the S2S and C2S child-group channel pipelines we can see specific
<a href="https://netty.io/4.1/api/io/netty/channel/ChannelHandler.html">ChannelHandler</a>
implementations that handle TLS, XMPP stanza decoding, stalled session logic, and the core Openfire
"business logic", with a different business logic connection handler depending on whether this is
a Server to Server connection or a Client to Server connection.
"business logic". While protocol framing and I/O-related handlers execute on Netty's I/O worker event
loops, handlers that may perform blocking or long-running work (such as authentication, routing, or
persistence) are explicitly offloaded to a dedicated blocking handler executor.
</p>

<figure>
@@ -134,6 +141,13 @@ <h2>Message Handling with Netty in Openfire</h2>
systems within Openfire.
</p>

<p>
It is important to note that not all message processing occurs on Netty EventLoop threads. While
socket reads and writes are handled by the I/O worker event loops, Openfire ensures that handlers
which may block or take significant time to execute are invoked on a separate blocking handler
executor. This design prevents EventLoop starvation and ensures consistent throughput under load.
</p>

<figure>
<a href="images/netty-message-handling.png"><img src="images/netty-message-handling.png" alt="Diagram showing how Openfire handles messages with Netty"></a>
<figcaption>How Openfire handles messages with Netty.</figcaption>
11 changes: 8 additions & 3 deletions i18n/src/main/resources/openfire_i18n.properties
@@ -1278,6 +1278,14 @@ system_property.admin.authorizedJIDs=The bare JID of every admin user for the De
system_property.xmpp.auth.ssl.context_protocol=The TLS protocol to use for encryption context initialization, overriding the Java default.
system_property.xmpp.parser.buffer.size=Maximum size of an XMPP stanza. Larger stanzas will cause a connection to be closed.
system_property.xmpp.auth.ssl.enforce_sni=Controls if the server enforces the use of SNI (Server Name Indication) when clients connect using TLS.
system_property.xmpp.socket.backlog=Sets the maximum number of incoming connection requests that can be queued before the OS starts rejecting new connections.
system_property.xmpp.socket.keepalive=Enables TCP keep-alive on the socket to periodically check if the connection is still alive and prevent idle connections from being silently dropped.
system_property.xmpp.socket.tcp-nodelay=Controls whether the Nagle algorithm is disabled (true) to send small packets immediately or enabled (false) to coalesce them.
system_property.xmpp.socket.tcp-reuseaddr=Allows the socket to bind to a port even if a previous connection on that port is in the TIME_WAIT state, enabling faster server restarts.
system_property.xmpp.socket.buffer.send=Defines the size (in bytes) of the TCP send buffer for accepted sockets.
system_property.xmpp.socket.buffer.receive=Defines the size (in bytes) of the TCP receive buffer for accepted sockets.
system_property.xmpp.socket.linger=Specifies how long a socket should wait to send remaining data when closed; negative disables SO_LINGER.
system_property.xmpp.socket.startup-latch-timeout=Timeout in seconds to wait for the boss EventLoop to process the startup barrier task before logging a warning.
system_property.xmpp.socket.ssl.active=Set to true to enable Direct TLS encrypted connections for clients, otherwise false
system_property.xmpp.socket.ssl.certificate.revocation.only-end-entity=Only verify revocation status of end-entity (leaf) certificates
system_property.xmpp.socket.ssl.certificate.revocation.soft-fail=Allow validation to continue if revocation information is unavailable
@@ -1484,9 +1492,6 @@ system_property.ssl.certificates.expirycheck.notify-admins=Determines if the adm
system_property.ssl.certificates.expirycheck.frequency=How often to check for (nearly) expired TLS certificates.
system_property.ssl.certificates.expirycheck.warning-period=How long (in hours) before a TLS certificate will expire for a warning is to be sent out to admins.

system_property.xmpp.connectionmanager.eventexecutor.threads=Maximum number of threads used to process connection events.
system_property.xmpp.connectionmanager.eventexecutor.timeout=Duration that unused, surplus threads that once processed connection events are kept alive.

# Server properties Page

server.properties.title=System Properties
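The new `xmpp.socket.*` properties introduced above correspond to standard TCP socket options. A minimal JDK-only sketch of applying such options follows; the values are illustrative, not Openfire's defaults, and Openfire itself sets these as Netty `ChannelOption`s rather than on plain sockets.

```java
import java.net.ServerSocket;
import java.net.Socket;

public class SocketOptionsSketch {
    // Applies options analogous to the xmpp.socket.* properties.
    static Socket configure(Socket socket) throws Exception {
        socket.setKeepAlive(true);           // xmpp.socket.keepalive
        socket.setTcpNoDelay(true);          // xmpp.socket.tcp-nodelay (disables Nagle's algorithm)
        socket.setReuseAddress(true);        // xmpp.socket.tcp-reuseaddr
        socket.setSendBufferSize(65536);     // xmpp.socket.buffer.send (bytes)
        socket.setReceiveBufferSize(65536);  // xmpp.socket.buffer.receive (bytes)
        socket.setSoLinger(false, 0);        // xmpp.socket.linger: disabled, matching a negative value
        return socket;
    }

    public static void main(String[] args) throws Exception {
        // The second constructor argument is the accept backlog (xmpp.socket.backlog).
        try (ServerSocket server = new ServerSocket(0, 50);
             Socket s = configure(new Socket())) {
            System.out.println(s.getKeepAlive() && s.getTcpNoDelay()); // prints "true"
        }
    }
}
```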
11 changes: 8 additions & 3 deletions i18n/src/main/resources/openfire_i18n_nl.properties
@@ -1189,6 +1189,14 @@ system_property.admin.authorizedJIDs=De bare JID van elke administratieve gebrui
system_property.xmpp.auth.ssl.context_protocol=Het TLS protocol dat gebruikt wordt om de encryptie-context te initialiseren. Deze instelling vervangt de standaard-waarde van Java.
system_property.xmpp.parser.buffer.size=Maximale grootte van een XMPP stanza. Grotere stanzas hebben tot gevolg dat de verbinding wordt afgesloten.
system_property.xmpp.auth.ssl.enforce_sni=Bepaalt of de server het gebruik van SNI (Server Name Indication) afdwingt wanneer cliënten verbinding maken via TLS.
system_property.xmpp.socket.backlog=Stelt het maximale aantal inkomende verbindingsaanvragen in dat in de wachtrij kan worden geplaatst voordat het besturingssysteem nieuwe verbindingen begint te weigeren.
system_property.xmpp.socket.keepalive=Schakelt TCP keep-alive in op de socket om periodiek te controleren of de verbinding nog actief is en voorkomt dat inactieve verbindingen ongemerkt worden verbroken.
system_property.xmpp.socket.tcp-nodelay=Geeft aan of het Nagle-algoritme is uitgeschakeld (true) om kleine pakketten direct te verzenden, of ingeschakeld (false) om ze te combineren.
system_property.xmpp.socket.tcp-reuseaddr=Staat toe dat de socket zich aan een poort bindt, zelfs als een vorige verbinding op die poort in de TIME_WAIT-status staat, wat snellere serverherstarts mogelijk maakt.
system_property.xmpp.socket.buffer.send=Definieert de grootte (in bytes) van de TCP verzendbuffer voor geaccepteerde sockets.
system_property.xmpp.socket.buffer.receive=Definieert de grootte (in bytes) van de TCP ontvangbuffer voor geaccepteerde sockets.
system_property.xmpp.socket.linger=Specificeert hoe lang een socket moet wachten om resterende gegevens te verzenden bij het sluiten; een negatieve waarde schakelt SO_LINGER uit.
system_property.xmpp.socket.startup-latch-timeout=Timeout in seconden om te wachten totdat de boss EventLoop de startup-barrier-taak heeft verwerkt voordat een waarschuwing wordt gelogd.
system_property.xmpp.socket.ssl.active=Bepaalt (indien 'true') of direct-TLS verbindingen voor clients worden aangeboden.
system_property.xmpp.socket.ssl.certificate.revocation.only-end-entity=Controleer alleen de intrekkingsstatus van eind-entiteit certificaten (leaf)
system_property.xmpp.socket.ssl.certificate.revocation.soft-fail=Laat validatie doorgaan als intrekkingsinformatie niet beschikbaar is
@@ -1389,9 +1397,6 @@ system_property.ssl.certificates.expirycheck.notify-admins=Bepaalt of de beheerd
system_property.ssl.certificates.expirycheck.frequency=Hoe vaak controleren op (bijna) verlopen TLS-certificaten.
system_property.ssl.certificates.expirycheck.warning-period=Hoe lang (in uren) voordat een TLS-certificaat verloopt voor een waarschuwing naar beheerders wordt gestuurd.

system_property.xmpp.connectionmanager.eventexecutor.threads=Maximum aantal threads dat wordt gebruikt om verbindingsgebeurtenissen te verwerken.
system_property.xmpp.connectionmanager.eventexecutor.timeout=Duur dat ongebruikte, overtollige threads die eerder verbindingsgebeurtenissen hebben verwerkt, actief blijven.

# Server properties Page

server.properties.title=Systeemeigenschappen
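The `xmpp.socket.startup-latch-timeout` property described in both locale files above is a bounded wait for the boss EventLoop to process a startup barrier task. A JDK-only sketch of that pattern, with hypothetical names (not Openfire's implementation):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class StartupLatchSketch {
    // The caller waits (with a timeout) until the "boss" loop has executed a
    // marker task, which proves all earlier startup work on that loop is done.
    static boolean awaitStartup(ExecutorService bossLoop, long timeoutSeconds)
            throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        bossLoop.execute(latch::countDown); // barrier task queued behind startup tasks
        boolean ready = latch.await(timeoutSeconds, TimeUnit.SECONDS);
        if (!ready) {
            // Mirrors the property's documented behaviour: warn, do not hang forever.
            System.err.println("Boss loop did not process startup barrier within "
                + timeoutSeconds + "s");
        }
        return ready;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService boss = Executors.newSingleThreadExecutor();
        System.out.println(awaitStartup(boss, 5)); // prints "true"
        boss.shutdown();
    }
}
```

This is the same mechanism that lets `start()` return only once the acceptor is fully ready (commit d89208c) without blocking indefinitely (commit 2d9c8de).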
@@ -1,5 +1,5 @@
/*
* Copyright (C) 2005-2008 Jive Software, 2017-2026 Ignite Realtime Foundation. All rights reserved.
* Copyright (C) 2005-2008 Jive Software, 2017-2025 Ignite Realtime Foundation. All rights reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -34,7 +34,6 @@
import org.jivesoftware.openfire.server.OutgoingSessionPromise;
import org.jivesoftware.openfire.session.*;
import org.jivesoftware.openfire.spi.BasicStreamIDFactory;
import org.jivesoftware.openfire.spi.ConnectionManagerImpl;
import org.jivesoftware.openfire.spi.ConnectionType;
import org.jivesoftware.openfire.streammanagement.TerminationDelegate;
import org.jivesoftware.util.JiveGlobals;
@@ -1404,29 +1403,27 @@ public CompletableFuture<Void> onConnectionClosing(Object handback)
return CompletableFuture.completedFuture(null);
}

Log.debug("Closing client session with address {} and streamID {} that does not have SM resume.", session.getAddress(), session.getStreamID());
CompletableFuture<Void> result = CompletableFuture.runAsync(() -> Log.debug("Closing client session with address {} and streamID {} that does not have SM resume.", session.getAddress(), session.getStreamID()));

try
{
if ((session.getPresence().isAvailable() || !session.wasAvailable()) && routingTable.hasClientRoute(session.getAddress()))
{
// Send an unavailable presence to the user's subscribers. This gives us a chance to send an
// unavailable presence to the entities that the user sent directed presences
final Presence presence = new Presence();
presence.setType(Presence.Type.unavailable);
presence.setFrom(session.getAddress());
router.route(presence);
}
} finally {
if ((session.getPresence().isAvailable() || !session.wasAvailable()) && routingTable.hasClientRoute(session.getAddress())) {
// Send an unavailable presence to the user's subscribers. This gives us a chance to send an
// unavailable presence to the entities that the user sent directed presences
final Presence presence = new Presence();
presence.setType(Presence.Type.unavailable);
presence.setFrom(session.getAddress());

result = result.thenRunAsync(() -> router.route(presence));
}

// In the completion stage remove the session (which means it'll be removed no matter if the previous stage had exceptions).
return result.whenComplete((v,t) -> {
try {
session.getStreamManager().onClose(router, serverAddress);
} finally {
// Ensure that the session is removed (no matter if the previous stage had exceptions).
// Note that the session can't be removed before the unavailable presence has been sent (as session-provided data is used by the broadcast).
removeSession(session);
}
}

return CompletableFuture.completedFuture(null);
});
}

@Override
@@ -1449,21 +1446,15 @@ public CompletableFuture<Void> onConnectionClosing(Object handback)
{
final LocalIncomingServerSession session = (LocalIncomingServerSession)handback;

Log.debug("Closing incoming server session with address {} and streamID {}.", session.getAddress(), session.getStreamID());

final ConnectionManagerImpl connectionManager = (ConnectionManagerImpl) XMPPServer.getInstance().getConnectionManager();
if (connectionManager == null) {
Log.info("No ConnectionManager available (server is likely shutting down). Skipping execution of close listener for incoming server session with address {} and streamID {}.", session.getAddress(), session.getStreamID());
return CompletableFuture.failedFuture(new IllegalStateException("No ConnectionManager available (server is likely shutting down)."));
}
CompletableFuture<Void> result = CompletableFuture.runAsync(() -> Log.debug("Closing incoming server session with address {} and streamID {}.", session.getAddress(), session.getStreamID()));

// Remove all the domains that were registered for this server session.
final Collection<CompletableFuture<Void>> tasks = new ArrayList<>();
for (final String domain : session.getValidatedDomains()) {
tasks.add(connectionManager.runConnectionEventTaskAsync(() -> unregisterIncomingServerSession(domain, session)));
for (String domain : session.getValidatedDomains()) {
tasks.add(CompletableFuture.runAsync(() -> unregisterIncomingServerSession(domain, session)));
}

return CompletableFuture.allOf(tasks.toArray(new CompletableFuture[0]));
return result.thenCompose(e -> CompletableFuture.allOf(tasks.toArray(new CompletableFuture[0])));
}

@Override
@@ -1486,21 +1477,15 @@ public CompletableFuture<Void> onConnectionClosing(Object handback)
{
final OutgoingServerSession session = (OutgoingServerSession)handback;

Log.debug("Closing outgoing server session with address {} and streamID {}.", session.getAddress(), session.getStreamID());

final ConnectionManagerImpl connectionManager = (ConnectionManagerImpl) XMPPServer.getInstance().getConnectionManager();
if (connectionManager == null) {
Log.info("No ConnectionManager available (server is likely shutting down). Skipping execution of close listener for outgoing server session with address {} and streamID {}.", session.getAddress(), session.getStreamID());
return CompletableFuture.failedFuture(new IllegalStateException("No ConnectionManager available (server is likely shutting down)."));
}
CompletableFuture<Void> result = CompletableFuture.runAsync(() -> Log.debug("Closing outgoing server session with address {} and streamID {}.", session.getAddress(), session.getStreamID()));

// Remove all the domains that were registered for this server session.
final Collection<CompletableFuture<Void>> tasks = new ArrayList<>();
for (DomainPair domainPair : session.getOutgoingDomainPairs()) {
tasks.add(connectionManager.runConnectionEventTaskAsync(() -> server.getRoutingTable().removeServerRoute(domainPair)));
tasks.add(CompletableFuture.runAsync(() -> server.getRoutingTable().removeServerRoute(domainPair)));
}

return CompletableFuture.allOf(tasks.toArray(new CompletableFuture[0]));
return result.thenCompose(e -> CompletableFuture.allOf(tasks.toArray(new CompletableFuture[0])));
}

@Override
@@ -1524,21 +1509,21 @@ public CompletableFuture<Void> onConnectionClosing(Object handback)
final ConnectionMultiplexerSession session = (ConnectionMultiplexerSession)handback;
final String domain = session.getAddress().getDomain();

Log.debug("Closing multiplexer session with address {} and streamID {}.", session.getAddress(), session.getStreamID());
CompletableFuture<Void> result = CompletableFuture.runAsync(() -> Log.debug("Closing multiplexer session with address {} and streamID {}.", session.getAddress(), session.getStreamID()));

// Remove all the domains that were registered for this server session
localSessionManager.getConnnectionManagerSessions().remove(session.getAddress().toString());
result = result.thenRunAsync(() -> localSessionManager.getConnnectionManagerSessions().remove(session.getAddress().toString()));

// Remove track of the cluster node hosting the CM connection
multiplexerSessionsCache.remove(session.getAddress().toString());
result = result.thenRunAsync(() -> multiplexerSessionsCache.remove(session.getAddress().toString()));

if (getConnectionMultiplexerSessions(domain).isEmpty()) {
// Terminate ClientSessions originated from this connection manager
// that are still active since the connection manager has gone down
ConnectionMultiplexerManager.getInstance().multiplexerUnavailable(domain);
result = result.thenRunAsync(() -> ConnectionMultiplexerManager.getInstance().multiplexerUnavailable(domain));
}

return CompletableFuture.completedFuture(null);
return result;
}

@Override
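The reverted close-listener code above chains a logging stage with per-domain cleanup tasks and joins them with `allOf`. A self-contained sketch of that composition pattern (the domain handling and list parameter are illustrative, not Openfire's signatures):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class CloseListenerSketch {
    // Mirrors the pattern: a logging stage, then per-domain cleanup tasks,
    // with the returned future completing only after all of them finish.
    static CompletableFuture<Void> onConnectionClosing(List<String> validatedDomains,
                                                       List<String> unregistered) {
        CompletableFuture<Void> result = CompletableFuture.runAsync(
            () -> System.out.println("Closing incoming server session"));

        // Tasks are started eagerly, as in the diff; thenCompose only gates completion.
        List<CompletableFuture<Void>> tasks = new ArrayList<>();
        for (String domain : validatedDomains) {
            tasks.add(CompletableFuture.runAsync(() -> {
                synchronized (unregistered) {
                    unregistered.add(domain); // stands in for unregisterIncomingServerSession
                }
            }));
        }

        // Complete only after the log stage and every cleanup task have finished.
        return result.thenCompose(
            v -> CompletableFuture.allOf(tasks.toArray(new CompletableFuture[0])));
    }
}
```

The same shape recurs in the outgoing-server and multiplexer listeners: side effects are expressed as async stages so the caller can observe (and sequence on) their completion.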