multi: allow disabling of sub-servers #537
Conversation
Force-pushed from 6a1b082 to 75d7659
ok - it is ready now @guggero :) has all the additions that we discussed offline
trying to see if I can add an itest quickly
Very nice, LGTM 🎉
Force-pushed from 16faf0e to 29f7ef4
cool - added an itest that runs the test suite against a lit node where all the subservers are disabled.
I was playing with this today just to see what breaks in the UI when disabling the daemons. I expected there to be errors when disabling Loop or Pool, so no surprises there. But in one case, there's a panic occurring, which was a bit surprising. When running with remote LND, Pool and Faraday and disabled Loop, the panic below occurs only on the first run of
In my docker setup, the […]. I tested a few other combinations of […]
Hi @jamaljsr - thanks for this! When you say "remote LND, Pool, Faraday and disabled Loop", do you mean that lnd, pool & faraday are all being run in remote mode, or just LND? Is this consistently happening after the call to "LITD: Handling gRPC web request: /poolrpc.Trader/NodeRatings"? I tried to quickly re-create this now but have not yet been successful. Will try again tomorrow :)
Yes, I am using the flags
Yes, it's always triggered by the
ok cool! I will try to re-create this tomorrow with those settings (I wasn't running them all in remote mode). Thanks again for finding this! 🐞
I tested removing the call to
I also tested with remote Loop and disabled Pool (flags
What's strange is that after a
thanks for the extra info @jamaljsr :) I have tried this configuration now and have still not been able to reproduce 🤔 probably worth noting that I'm not running them all in docker containers, I'm just running everything locally. Very strange that it also works for you after restarting... I wonder what this could be...
ok I thiiiink I may have found something that might lead to this panic... brb |
Force-pushed from 89c4691 to ec8b0e7
Force-pushed from ec8b0e7 to 6679d2d
@jamaljsr, can you let me know if you still run into this panic with the latest version?
LGTM 🎉
Force-pushed from 6679d2d to 22f0264
I just tested 6679d2d and can confirm that the panic no longer occurs in the two scenarios mentioned in #537 (comment). Great work resolving this even though you couldn't repro it yourself 💪
This looks good to me 🚀! Just a small fix below that I've noted. Super excited to get into the codebase as I'm new to it!
A general question that's related to the fix though:
Would it make sense to include disabling some subservers while the rest are enabled, and if so, to include testing of that? This would require quite a few changes to the testing code though.
Force-pushed from 22f0264 to 0f85a9d
Thanks for the review @ViktorTigerstrom! Yeah, good question regarding testing multiple configurations. You can disable some while enabling others, but I think the current test is good enough since these sub-servers don't interact with each other. The test suite is also pretty heavy right now, so I'd like to avoid running it unnecessarily unless it is a specific configuration we explicitly want to test for.
Awesome, thanks 🚀
You can disable some while enabling others, but I think the current test is good enough since these sub-servers don't interact with each other.
Ok makes sense :)!
awesome! Thanks for the review everyone 🙏 Just a note for anyone looking here and seeing it has 2 approvals: please don't merge yet :) We want to wait for the front-end side of things to be ready to handle the case where sub-servers are disabled. We might actually first want to merge #541 so that we don't allow disabling of subservers until the front end can query the status. In that case, I can definitely decouple the PRs so that we can merge that one first. Let me know
Enable starting litd with the taproot asset subserver disabled. The default mode for the new sub-server is "Disabled". Add coverage in itests for flows with some subservers disabled (based on Elle's #537)
Force-pushed from 0f85a9d to 7a848fd
Is this branch stable enough for me to use if I use the […] soon.
If so, I would appreciate it if you could rebase on top of #598.
I think this PR is going to be replaced by #541. @ellemouton, can we close this? It looks like both of this PR's commits are included in #541.
yes indeed - we can close this :) @AndySchroder - please see #541 instead
With this PR, a new "mode" is added for subservers to complement the existing "integrated" and "remote" modes.
The new mode is "disable", which will allow users to pick and choose which subservers they want Litd to start up with.
For example:
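A hypothetical invocation is shown below. The flag names follow litd's existing per-subserver `*-mode` options, but the exact names and values accepted by this PR are an assumption, not confirmed by the thread:

```
litd --loop-mode=disable --pool-mode=disable --faraday-mode=integrated
```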
This mode addition is added in the last commit of the PR. The prior commits prepare the code for the change by making all subserver logic a bit more contained within the subserver manager.
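To make that description concrete, here is a minimal, illustrative Go sketch of how a "disable" mode can gate subserver startup inside a manager. This is not the litd implementation; all type and function names here are hypothetical.

```go
package main

import "fmt"

// The three modes discussed in this PR: the existing "integrated" and
// "remote" modes, plus the new "disable" mode.
const (
	ModeIntegrated = "integrated"
	ModeRemote     = "remote"
	ModeDisable    = "disable"
)

// subServer pairs a subserver name with its configured mode.
type subServer struct {
	name string
	mode string
}

// manager owns all subservers and decides which ones actually start.
type manager struct {
	servers []subServer
}

// start walks the registered subservers and skips any whose mode is
// "disable", so the rest of litd keeps running without them.
func (m *manager) start() {
	for _, s := range m.servers {
		if s.mode == ModeDisable {
			fmt.Printf("subserver %s is disabled, skipping\n", s.name)
			continue
		}
		fmt.Printf("starting subserver %s in %s mode\n", s.name, s.mode)
	}
}

func main() {
	m := &manager{servers: []subServer{
		{name: "loop", mode: ModeDisable},
		{name: "pool", mode: ModeRemote},
		{name: "faraday", mode: ModeIntegrated},
	}}
	m.start()
}
```

The point of the sketch is only that the mode check lives in one place (the manager), so callers elsewhere in the codebase do not need to special-case disabled subservers.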