Description
Steps to reproduce:

- Create a catalog on cluster:

  ```sh
  kubectl apply -f config/samples/catalogd_operatorcatalog.yaml
  ```

- Create a `BundleDeployment`:

  ```yaml
  apiVersion: core.rukpak.io/v1alpha1
  kind: BundleDeployment
  metadata:
    name: operator-sample-manual
  spec:
    provisionerClassName: core-rukpak-io-plain
    template:
      metadata: {}
      spec:
        provisionerClassName: core-rukpak-io-registry
        source:
          image:
            ref: quay.io/operatorhubio/argocd-operator@sha256:1a9b3c8072f2d7f4d6528fa32905634d97b7b4c239ef9887e3fb821ff033fef6
          type: image
  ```
- Create an `Operator` which resolves to the same bundle:

  ```yaml
  apiVersion: operators.operatorframework.io/v1alpha1
  kind: Operator
  metadata:
    labels:
      app.kubernetes.io/name: operator
      app.kubernetes.io/instance: operator-sample
      app.kubernetes.io/part-of: operator-controller
      app.kubernetes.io/managed-by: kustomize
      app.kubernetes.io/created-by: operator-controller
    name: operator-sample
  spec:
    packageName: argocd-operator
    version: 0.6.0
  ```
Actual result:

```yaml
apiVersion: operators.operatorframework.io/v1alpha1
kind: Operator
metadata:
  # ...
  name: operator-sample
spec:
  # ...
status:
  conditions:
  - lastTransitionTime: "2023-10-06T12:26:51Z"
    message: resolved to "quay.io/operatorhubio/argocd-operator@sha256:1a9b3c8072f2d7f4d6528fa32905634d97b7b4c239ef9887e3fb821ff033fef6"
    observedGeneration: 1
    reason: Success
    status: "True"
    type: Resolved
  - lastTransitionTime: "2023-10-06T12:26:57Z"
    message: 'bundledeployment not ready: rendered manifests contain a resource that
      already exists. Unable to continue with install: Namespace "argocd-operator-system"
      in namespace "" exists and cannot be imported into the current release: invalid
      ownership metadata; annotation validation error: key "meta.helm.sh/release-name"
      must equal "operator-sample": current value is "operator-sample-manual"'
    observedGeneration: 1
    reason: InstallationFailed
    status: "False"
    type: Installed
  resolvedBundleResource: quay.io/operatorhubio/argocd-operator@sha256:1a9b3c8072f2d7f4d6528fa32905634d97b7b4c239ef9887e3fb821ff033fef6
```
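For context, the `InstallationFailed` message is Helm's resource-adoption validation surfacing through the provisioner: a resource that already exists can only be "imported" into a release if its release annotations name that release. A simplified, stdlib-only sketch of that check (the function name and structure here are illustrative, not Helm's actual code, and the real check also validates `meta.helm.sh/release-namespace` and `app.kubernetes.io/managed-by`):

```go
package main

import "fmt"

// checkOwnership mimics, in simplified form, the adoption check Helm runs
// when a rendered manifest matches a resource that already exists on the
// cluster: the resource's release-name annotation must match the release
// that is now trying to own it.
func checkOwnership(annotations map[string]string, releaseName string) error {
	const key = "meta.helm.sh/release-name"
	current := annotations[key]
	if current != releaseName {
		return fmt.Errorf("invalid ownership metadata; annotation validation error: key %q must equal %q: current value is %q",
			key, releaseName, current)
	}
	return nil
}

func main() {
	// The Namespace was created by the manual BundleDeployment, so it
	// carries that release's name in its annotations.
	existing := map[string]string{"meta.helm.sh/release-name": "operator-sample-manual"}
	fmt.Println(checkOwnership(existing, "operator-sample"))
}
```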
Expected result:

Not sure what the expected behaviour would be here. Perhaps a better error message indicating the conflict. Or, if we can successfully resolve, updating `ownerReferences` on the already existing `BundleDeployment` instead of creating a new one?
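The adoption idea could look roughly like the following. This is only a sketch with toy stand-in types (this `BundleDeployment` struct is hypothetical, not the real rukpak API type, and a real controller would go through controller-runtime's client), assuming the controller can find an existing `BundleDeployment` for the same bundle image and record itself as an owner rather than creating a conflicting duplicate:

```go
package main

import "fmt"

// Toy stand-ins for the real API types.
type OwnerReference struct {
	Kind, Name string
}

type BundleDeployment struct {
	Name            string
	BundleRef       string // bundle image the deployment points at
	OwnerReferences []OwnerReference
}

// reconcile returns the BundleDeployment the Operator should own. If one
// already exists for the same bundle image, it is adopted by appending an
// ownerReference; otherwise a new one is created as today.
func reconcile(existing []*BundleDeployment, owner OwnerReference, bundleRef string) *BundleDeployment {
	for _, bd := range existing {
		if bd.BundleRef == bundleRef {
			// Adopt: record the Operator as an additional owner
			// instead of creating a second, conflicting release.
			bd.OwnerReferences = append(bd.OwnerReferences, owner)
			return bd
		}
	}
	return &BundleDeployment{
		Name:            owner.Name,
		BundleRef:       bundleRef,
		OwnerReferences: []OwnerReference{owner},
	}
}

func main() {
	manual := &BundleDeployment{
		Name:      "operator-sample-manual",
		BundleRef: "quay.io/operatorhubio/argocd-operator@sha256:1a9b3c8072f2d7f4d6528fa32905634d97b7b4c239ef9887e3fb821ff033fef6",
	}
	op := OwnerReference{Kind: "Operator", Name: "operator-sample"}
	bd := reconcile([]*BundleDeployment{manual}, op, manual.BundleRef)
	fmt.Println(bd.Name) // adopts the existing "operator-sample-manual"
}
```

Whether silent adoption is desirable is a separate question (the manual `BundleDeployment` was created intentionally), so the better-error-message option may be the safer default.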