MGMT-3603 Changes for single node IPv6 #441
Conversation
cluster_info = client.cluster_get(cluster_id)
cluster_info.vip_dhcp_allocation = True
cluster_info.machine_network_cidr = machine_net.cidr_v4
if set_vip_dhcp_allocation:
No need for the `if`, just:
cluster_info.vip_dhcp_allocation = set_vip_dhcp_allocation
OK
Actually, since Python is dynamically typed, I prefer to set it to True explicitly.
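For reference, a minimal sketch of how the resolved helper could look, combining the fragments visible in the diffs in this thread; the exact signature and structure are assumptions, not the repo's verbatim code:

```python
def set_cluster_machine_cidr(client, cluster_id, machine_net, set_vip_dhcp_allocation=True):
    # Fetch the current cluster object from the assisted-service client.
    cluster_info = client.cluster_get(cluster_id)
    if set_vip_dhcp_allocation:
        # Per the discussion above: assign True explicitly rather than
        # forwarding the flag itself.
        cluster_info.vip_dhcp_allocation = True
    # Use the IPv6 CIDR only for an IPv6-only machine network.
    cluster_info.machine_network_cidr = (
        machine_net.cidr_v6
        if machine_net.has_ip_v6 and not machine_net.has_ip_v4
        else machine_net.cidr_v4)
    client.update_cluster(cluster_id, cluster_info)
```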
discovery-infra/start_discovery.py (Outdated)
client.update_cluster(cluster_id, cluster_info)

def _get_ip_for_single_node_v6(client, cluster_id):
I want to have only one place that gets host IPs.
This one gets the IP that belongs to the machine CIDR.
But most of the logic is the same, so let's use the existing one and just verify the machine CIDR?
And I also want to verify that it is IPv6. Anyway, I will change it.
This approach will prefer IPv4 over IPv6. Not sure I want that.
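To illustrate the concern: a generic helper that returns the first host IP found inside a machine CIDR will, given a mixed address list, return whichever family appears first. A hypothetical sketch using Python's standard `ipaddress` module (the helper name and inputs are illustrative, not the repo's actual API):

```python
import ipaddress

def get_ip_in_cidr(host_ips, machine_cidr):
    # Return the first host IP that falls inside the machine CIDR.
    net = ipaddress.ip_network(machine_cidr)
    for ip in host_ips:
        addr = ipaddress.ip_address(ip)
        if addr.version == net.version and addr in net:
            return ip
    return None

# With a mixed list, iteration order decides the winner, so a helper
# fed IPv4 addresses first effectively prefers IPv4 over IPv6:
ips = ["192.168.126.10", "2001:db8::10"]
print(get_ip_in_cidr(ips, "2001:db8::/120"))  # -> "2001:db8::10"
print(get_ip_in_cidr(ips, "192.168.126.0/24"))  # -> "192.168.126.10"
```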
@@ -288,7 +302,7 @@ def _cluster_create_params():
    ipv4 = args.ipv4 and args.ipv4.lower() in MachineNetwork.YES_VALUES
    ipv6 = args.ipv6 and args.ipv6.lower() in MachineNetwork.YES_VALUES
    ntp_source = _get_host_ip_from_cidr(args.vm_network_cidr6 if ipv6 and not ipv4 else args.vm_network_cidr)
-   user_managed_networking = is_none_platform_mode()
+   user_managed_networking = is_user_managed_networking()
One of the ideas behind not changing this flag was to verify that there is no need to set it to true in the cluster create call.
It seemed to me that the definition of `is_user_managed_networking` here was incorrect. It is not just for none platform but for single node as well.
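A sketch of the distinction being made, assuming the flags discussed in this thread (not the repo's verbatim code; `args.platform` is an assumed flag, while `args.master_count` appears in the diffs):

```python
def is_none_platform_mode():
    # True when the cluster is created with platform "none".
    return args.platform and args.platform.lower() == "none"

def is_user_managed_networking():
    # User-managed networking covers both none-platform mode and
    # single-node clusters, which is the correction made in this PR.
    return is_none_platform_mode() or args.master_count == 1
```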
@@ -394,7 +408,7 @@ def nodes_flow(client, cluster_name, cluster):
    cluster_info = client.cluster_get(cluster.id)
    macs = utils.get_libvirt_nodes_macs(nodes_details["libvirt_network_name"])

-   if not (cluster_info.api_vip and cluster_info.ingress_vip) and args.master_count != 1:
+   if not (cluster_info.api_vip and cluster_info.ingress_vip):
Not sure I understand this condition, maybe you know?
Whoever wrote it didn't want to set VIPs or the machine CIDR for none platform.
terraform_files/baremetal/main.tf (Outdated)
@@ -162,24 +162,24 @@ resource "libvirt_domain" "worker" {

data "libvirt_network_dns_host_template" "api" {
    count = 1
-   ip = var.bootstrap_in_place ? var.libvirt_master_ips[count.index][0] : var.api_vip
+   ip = var.bootstrap_in_place && !var.ipv6 ? var.libvirt_master_ips[count.index][0] : var.api_vip
Can you explain the logic, please?
When using IPv6, we do not know the node IPs in advance, so we cannot set them beforehand. Therefore, for DNS, we point all the names at the API VIP, and once the node IP is known, we repoint the API VIP entry at the node IP. Here we just set all the DNS names to point to the API VIP.
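A sketch of the repoint step described here, assuming it is done through libvirt's `virsh net-update` once the node IP is known (the function, network name, and hostnames are illustrative; the repo may do this differently):

```python
import subprocess

def repoint_api_dns(network, hostname, old_ip, new_ip):
    # Remove the placeholder record that pointed the name at the API VIP.
    subprocess.run(
        ["virsh", "net-update", network, "delete", "dns-host",
         f"<host ip='{old_ip}'><hostname>{hostname}</hostname></host>",
         "--live", "--config"],
        check=True)
    # Add a record pointing the same name at the discovered node IP.
    subprocess.run(
        ["virsh", "net-update", network, "add", "dns-host",
         f"<host ip='{new_ip}'><hostname>{hostname}</hostname></host>",
         "--live", "--config"],
        check=True)
```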
It was added to support DNS for single node, and single node has no API VIP, so I don't understand the logic.
IMO this PR may interfere with static IPs, e.g. in how it handles the API VIP. What do you think @yevgeny-shnaidman?
Force-pushed from 59adc18 to d9f185a
Force-pushed from d9f185a to 826092f
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: ori-amizur, tsorya. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
cluster_info.machine_network_cidr = machine_net.cidr_v4
if set_vip_dhcp_allocation:
    cluster_info.vip_dhcp_allocation = True
cluster_info.machine_network_cidr = machine_net.cidr_v6 if machine_net.has_ip_v6 and not machine_net.has_ip_v4 else machine_net.cidr_v4
Can be simplified to `machine_net.cidr_v4 if machine_net.has_ip_v4 else machine_net.cidr_v6`.
It is not the same: the two expressions differ when neither address family is present.
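A quick check in plain Python of where the two expressions disagree (hypothetical flag values standing in for `machine_net.has_ip_v4` and `machine_net.has_ip_v6`):

```python
# original:  machine_net.cidr_v6 if has_ip_v6 and not has_ip_v4 else machine_net.cidr_v4
# suggested: machine_net.cidr_v4 if has_ip_v4 else machine_net.cidr_v6
for has_v4, has_v6 in [(True, True), (True, False), (False, True), (False, False)]:
    original = "v6" if has_v6 and not has_v4 else "v4"
    suggested = "v4" if has_v4 else "v6"
    print(f"has_v4={has_v4} has_v6={has_v6}: original={original} suggested={suggested}")
```

The rows agree except for (False, False), where the original falls back to `cidr_v4` while the suggested form would return `cidr_v6`.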
/retest
Please review the full test history for this PR and help us cut down flakes.
if args.vip_dhcp_allocation:
if args.master_count == 1 and machine_net.has_ip_v6 and not machine_net.has_ip_v4:
    set_cluster_machine_cidr(client, cluster.id, machine_net, set_vip_dhcp_allocation=False)
    tf.change_variables({"libvirt_master_ips": [[helper_cluster.Cluster.get_ip_for_single_node_v6(client, cluster.id, args.vm_network_cidr6)]]})
This line will break static IPs as it tries to update TF variables. Currently this causes an attempt to set a MAC address, which fails.
I believe `if not args.with_static_ips:` can solve it.
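A sketch of the proposed guard, reusing the names from the snippet above (a suggestion under the assumption that `args.with_static_ips` is the flag driving the static-IP flow, not the merged code):

```python
if args.master_count == 1 and machine_net.has_ip_v6 and not machine_net.has_ip_v4:
    set_cluster_machine_cidr(client, cluster.id, machine_net, set_vip_dhcp_allocation=False)
    if not args.with_static_ips:
        # Updating the Terraform variables is what breaks the static-IP
        # flow, so only do it when IPs are assigned dynamically.
        tf.change_variables({"libvirt_master_ips": [[
            helper_cluster.Cluster.get_ip_for_single_node_v6(
                client, cluster.id, args.vm_network_cidr6)]]})
```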
/retest
Please review the full test history for this PR and help us cut down flakes.
/cc @empovit
/cc @tsorya