Replies: 18 comments 1 reply
-
@deajan
-
Sure
Dates are recent because I disconnected / reconnected eno2 just to make sure (and yes, the management server still complains).
-
@deajan
-
Sure:
-
@weizhouapache While digging into my problem, I am thinking of a potential issue: do the bridges need to be physically connected? I have one bridge (br_bgp0) that is "fed" by a vxlan interface (not connected to a physical interface but to a wireguard instance, in order to transport some public IPs into my lab) on this KVM host. Is this "allowed"? If not, that could perhaps explain why cloudstack complains about my bridges, even though they work.
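For reference, that bridge is built roughly like this (an iproute2 sketch; the vxlan100 name, the VNI and the remote address are placeholders, not my exact lab values):

```bash
# VXLAN interface riding on the wireguard tunnel (no physical NIC involved)
ip link add vxlan100 type vxlan id 100 dev wg0 remote 10.100.0.2 dstport 4789
ip link set vxlan100 up

# Bridge that cloudstack should use, with only the vxlan interface as a slave
ip link add br_bgp0 type bridge
ip link set vxlan100 master br_bgp0
ip link set br_bgp0 up
```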
-
change the log level and restart cloudstack-agent ?
yes, that's ok.
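For anyone finding this later: on a stock agent install that usually means editing the agent's log4j2 config and restarting the service (the path below assumes the default package layout; adjust if yours differs):

```bash
# Raise the agent log level (e.g. change INFO to DEBUG for the relevant loggers)
vi /etc/cloudstack/agent/log4j-cloudstack.xml
# Restart the agent so the new level is picked up
systemctl restart cloudstack-agent
```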
-
Hmmm... interesting.
Looks like using a bridge in cloudstack is indeed limited to physical interfaces.
Now the agent doesn't complain anymore, since there is now a physical-looking slave interface on the bridge. I guess the bridge tests are indeed a bit too restrictive. I can do python and bash PRs, but I really am not fluent in java. @weizhouapache Big thanks for the hints.
-
that's strange.
-
I think I mixed up br_npf0 and br_bgp0 in my tests (I deleted & recreated bridges multiple times in order to diagnose my issue). The point is, my bgp interface, the bridge br_bgp0 with a vxlan interface, wasn't accepted by the cloudstack agent, since the bridge test complained that there wasn't a physical interface. There is still no physical interface, but I fooled the agent test by creating a dummy interface as a slave of that bridge and naming it ethdummy0.
I think those tests should be relaxed: they should only check that there are interfaces other than vnet* connected to the bridge.
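Concretely, the workaround was something along these lines (an iproute2 sketch; ethdummy0 is just an arbitrary name that looks physical, and the change is not persistent across reboots):

```bash
# Create a dummy interface with a "physical-looking" name
ip link add ethdummy0 type dummy
# Enslave it to the bridge the agent inspects and bring it up
ip link set ethdummy0 master br_bgp0
ip link set ethdummy0 up
```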
-
hmm, has the kvm host been added to cloudstack?
-
Yes, the host is now visible and manageable in cloudstack. Anything I need to check?
-
Are the system VMs Running and the agent states Up?
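If cloudmonkey (cmk) is configured against the management server, a quick way to check both is something like this (a sketch; assumes cmk is already set up):

```bash
# System VMs (CPVM / SSVM) and their state
cmk list systemvms filter=name,state,zonename
# KVM hosts and their agent / resource state
cmk list hosts type=Routing filter=name,state,resourcestate
```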
-
@deajan
-
@weizhouapache You mean like this? I don't have system VMs yet (it's a fresh cloudstack lab setup), but the host is up and running according to the management server.
Of course, this will prevent me from using cloudstack properly until I reconnect eno2, but at least it validates the assumption.
-
I am a bit confused about br_npf0 and br_bgp0. Do you think there is anything we could improve or fix?
-
Sorry, as I said, at one moment I think I mixed up both while creating/deleting bridges to find out what the issue was. So basically, the test that checks whether there is a physical interface attached to the bridge is too strict. I would change that check to report success if at least one slave different from "vnet*" or "vmbr*" is present (so we rule out running VM interfaces from uplinks). This would allow bridge slave interfaces to be VXLAN, GRETAP, Geneve or whatever fancy interface someone would like to use. It would also make cloudstack future-proof if someday a new ethernet driver naming scheme comes out.
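Something along these lines, as a bash sketch of the idea (not the agent's actual code; it just looks at the bridge's slaves in sysfs):

```bash
#!/bin/bash
# Pass if the bridge has at least one slave that is not a VM tap (vnet*)
# and not a vmbr* interface.
bridge="$1"
for slave in /sys/class/net/"$bridge"/brif/*; do
    [ -e "$slave" ] || continue          # empty bridge: glob did not expand
    case "$(basename "$slave")" in
        vnet*|vmbr*) continue ;;         # ignore VM-facing interfaces
        *) exit 0 ;;                     # any other slave counts as an uplink
    esac
done
exit 1
```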
-
as far as I know,
-
You are right, I actually meant
-
problem
So I added a KVM hypervisor running AlmaLinux 9.5 to a Cloudstack Management Server via the UI, which failed with the error
Unable to add the host: Cannot find the server resources at <host>
While digging in the cloudstack management-server.log file, I noticed that my bridge br_npf0 is not found according to the management server.
Looking at my configuration on the KVM host, the bridge br_npf0 exists, is up, has an IP, and can ping the management server.
The management server zone is set up with that exact same bridge name.
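For what it's worth, the host-side checks were along these lines (plain iproute2; the management server address is a placeholder):

```bash
ip -br link show br_npf0     # bridge exists and is UP
ip -br addr show br_npf0     # bridge carries the expected IP
ping -c 3 192.0.2.1          # management server reachable from the host (placeholder address)
```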
Is there any direction to point me to?
versions
Cloudstack 4.20 running on AlmaLinux 9.5
KVM host AlmaLinux 9.5 with bridge setup via NetworkManager
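The bridge itself was created with NetworkManager, roughly like this (an nmcli sketch; whether eno2 is the right uplink and the addresses shown are assumptions for the lab, adjust to your setup):

```bash
# Bridge with a static address
nmcli connection add type bridge ifname br_npf0 con-name br_npf0 \
    ipv4.method manual ipv4.addresses 192.0.2.10/24 ipv4.gateway 192.0.2.1
# Enslave the physical NIC to the bridge
nmcli connection add type bridge-slave ifname eno2 con-name br_npf0-eno2 master br_npf0
nmcli connection up br_npf0
```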
The steps to reproduce the bug
What to do about it?
No response