Bug Description
After the configured backends of a balanced pool have been failing for a while, they all end up at the same minimum weight. If the pool is configured with 2 hosts and 1 connection each, the current implementation will then always pick the same backend for further requests instead of alternating between the 2 options. This needlessly prolongs the disruption until whichever backend happened to end up on top of the heap becomes healthy again.
The mechanism seems to work fine once more than 2 hosts are configured, recovering soon after one host becomes available.
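To illustrate the failure mode described above, here is a minimal, hypothetical sketch (not undici's actual code): if selection simply takes the highest-weight backend, a tie always resolves to the same element, so two backends stuck at the same minimum weight never alternate.

```javascript
// Hypothetical greedy selection: returns the first backend holding the
// maximal weight, so ties always resolve to the same host.
function pickGreedy(backends) {
  return backends.reduce((best, b) => (b.weight > best.weight ? b : best));
}

const backends = [
  { origin: 'http://host-a:3000', weight: 1 }, // both decayed to the
  { origin: 'http://host-b:3000', weight: 1 }, // same minimum weight
];

const picks = Array.from({ length: 4 }, () => pickGreedy(backends).origin);
console.log(picks); // host-a every time while the weights are tied
```

With equal weights, `host-b` is never selected, matching the observed behavior of one backend absorbing every request.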
Reproducible By
Set up a balanced pool with 2 hosts. Send a stream of requests while both hosts are down, then observe that every request is dispatched to the same backend.
Expected Behavior
Hosts of equal weight have equal chances of selection, leading to successful recovery when 1 of the 2 hosts is available, regardless of ordering.
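As a hedged sketch of the expected behavior (again hypothetical, not undici's implementation): a picker that rotates among backends whose weights are tied lets two equally weighted hosts alternate, so a request eventually lands on whichever host has recovered.

```javascript
// Hypothetical picker that scans forward from the last selection, so
// backends sharing the maximal weight are chosen in rotation.
function makePicker(backends) {
  let last = -1;
  return function pick() {
    const maxWeight = Math.max(...backends.map((b) => b.weight));
    for (let step = 1; step <= backends.length; step++) {
      const i = (last + step) % backends.length;
      if (backends[i].weight === maxWeight) {
        last = i;
        return backends[i];
      }
    }
  };
}

const pick = makePicker([
  { origin: 'http://host-a:3000', weight: 1 },
  { origin: 'http://host-b:3000', weight: 1 },
]);
const order = Array.from({ length: 4 }, () => pick().origin);
console.log(order); // alternates: host-a, host-b, host-a, host-b
```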
Logs & Screenshots
Let me know if a particular log would help the investigation; I'm not familiar with this project's best practices.
Environment
Undici 7.2.3 in the project where I first observed the bug; current main branch for reproduction on a private machine.
Node 22, macOS 26.3, Fedora 43