Implement Agent service #744
Conversation
@markmandel For whenever you're back, I can't replicate this test failing locally, and I don't have permissions to retry the cloud build.
If you ever want to retry one of your builds:
Will restart the build with no changes to the code 👍🏻 But if you get a chance to fix the conflict, let's see if the error repeats. Might be a flake.
Force-pushed from a39e297 to 02ee45d
Rebuilding, let's see what happens.
Okay, looks like it's a pretty consistent failure.
Next step will be for me to attempt to replicate locally.
Can replicate this locally as well.
I'll see if I can narrow down where this is failing, but what do you see on your end when you run the above command?
tests/qcmp.rs (outdated)
};
let server_config = std::sync::Arc::new(quilkin::Config::default());
let (_, rx) = tokio::sync::watch::channel(());
tokio::spawn(async move { agent.run(server_config, rx).await });
Suggested change:
- tokio::spawn(async move { agent.run(server_config, rx).await });
+ tokio::spawn(async move { agent.run(server_config, rx).await.expect("Agent should run") });
This showed me the issue:
called `Result::unwrap()` on an `Err` value: Elapsed(())
thread 'agent_ping' panicked at 'Agones should run: Address already in use (os error 98)
Location:
/home/mark/workspace/quilkin/src/cli/agent.rs:98:26', tests/qcmp.rs:50:14
stack backtrace:
Doesn't look like qcmp is starting.
src/protocol.rs (outdated)
tracing::info!(%port, "spawning IPv4 and IPv6 QCMP sockets");
for (address, socket) in [
    (ipv4, tokio::net::UdpSocket::bind(ipv4).await?),
Looks like this is the issue - attempting to bind both an IPv4 address and an IPv6 address to the same port means the second bind finds the port already in use.
If you want to do this, I expect you'll want to use / extend utils::net::socket_with_reuse so that you can reuse the port. You'll want to listen on both sockets, but that should be handleable with a select!
I was messing about with this PR, and started taking a stab at a UdpSocket we could reuse that would work for both IPv6 and IPv4 local ports. Let me know if you think it's a good idea, and I'll finish it off. Pretty sure it's what we'll need for #690
I got it all working - filing a PR against this PR. I rebased against
Force-pushed from 02ee45d to 4545993
Build Succeeded 🥳
Build Id: ce64722a-ae1d-46f1-9b3d-218861425ebe
The following development images have been built, and will exist for the next 30 days. To build this version:
This PR adds a new service called "Agent". The goal of this service is to better fit relay setups: it only forwards configuration, rather than also acting as a management server. It also includes a QCMP service that makes the agent pingable, so it can act as a pingable "beacon" in a given cluster.
Following this PR I intend to make a follow-up that removes the
--relay
flag from the management server (as it's not really needed anymore with agent), and moves the proxy's QCMP implementation to its own port, using the same task the agent uses, to bring those services in line with this one.