Description
What steps did you take and what happened:
While testing applying a cluster template in a fakeIPA environment, provisioning of the BMH did not start even though the Machine was in the provisioning state, and I see the following error in CAPM3:
E0925 06:16:26.153022 1 controller.go:324] "msg"="Reconciler error" "error"="Failed to create secrets: NIC name not found enp1s0" "Metal3Data"={"name":"test1-workers-template-0","namespace":"metal3"} "controller"="metal3data" "controllerGroup"="infrastructure.cluster.x-k8s.io" "controllerKind"="Metal3Data" "name"="test1-workers-template-0" "namespace"="metal3" "reconcileID"="75cdec08-a060-4a6f-b69e-21b0d80dc072"
I0925 06:16:26.154381 1 metal3data_manager.go:163] "msg"="Metadata is part of Metal3DataTemplate" "cluster"="test1" "logger"="controllers.Metal3Data.Metal3Data-controller" "metal3-data"={"Namespace":"metal3","Name":"test1-workers-template-0"}
The interface name of the fake nodes was eth1, and this is what I see on the inspected BMH:
hardware:
  cpu:
    arch: x86_64
    clockMegahertz: 2100
    count: 2
    flags:
    - fpu
    - fxsr
    - mmx
    - sse
    - sse2
  firmware:
    bios: {}
  nics:
  - ip: 172.22.0.100
    mac: 00:5c:52:31:3a:9c
    model: 0x1af4 0x0001
    name: eth1
  systemVendor: {}
Using the same NIC names as the cluster template (enp1s0, enp2s0) in the fake VMs fixed the issue, but this might not be the wanted behavior: NIC names are assigned by the OS, so the names in the cluster template can differ and should not break provisioning. A sketch of the relevant template fragment is shown below.
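For reference, a minimal sketch of the kind of Metal3DataTemplate fragment involved (names and values are illustrative, not the exact manifest used here):

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: Metal3DataTemplate
metadata:
  name: test1-workers-template
  namespace: metal3
spec:
  clusterName: test1
  networkData:
    links:
      ethernets:
      - type: phy
        id: enp1s0
        mtu: 1500
        macAddress:
          # fromHostInterface looks up the MAC by NIC name in the inspected
          # hardware list, so "enp1s0" must match a name under
          # status.hardware.nics (here the fake node reports eth1, which
          # triggers "NIC name not found enp1s0").
          fromHostInterface: enp1s0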
What did you expect to happen:
CAPM3 should still be able to continue if the NIC names differ from the template; the only required identifier should be the MAC address.
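As a rough illustration (assuming the Metal3DataTemplate networkData API), referencing the link by a literal MAC instead of a host interface name would avoid the dependency on OS-assigned naming:

networkData:
  links:
    ethernets:
    - type: phy
      id: enp1s0
      mtu: 1500
      macAddress:
        # Literal MAC taken from the inspected BMH; this avoids relying on the
        # OS-assigned NIC name, at the cost of hard-coding the MAC per host.
        string: "00:5c:52:31:3a:9c"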
Anything else you would like to add:
fakeIPA PR discussion:
metal3-io/utility-images#20
Environment:
- Cluster-api version: v1.8.3
- Cluster-api-provider-metal3 version: v1.8.1
- Environment (metal3-dev-env or other): dev-env + fakeIPA
- Kubernetes version (use kubectl version):
Client Version: v1.31.0
Kustomize Version: v5.4.2
Server Version: v1.30.0
/kind bug