Conversation


@anik120 anik120 commented Sep 19, 2025

No description provided.


openshift-ci bot commented Sep 19, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign coverprice for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@anik120 anik120 force-pushed the olmv1-single-own-namespace branch from c9b7c0f to ac0ba97 on September 19, 2025 19:10
@anik120 anik120 changed the title from "OLMv1 single/own namespace install mode support" to "OPRUN-4133: OLMv1 single/own namespace install mode support" Sep 19, 2025
@openshift-ci-robot openshift-ci-robot added the jira/valid-reference label (Indicates that this PR references a valid Jira ticket of any type.) Sep 19, 2025

openshift-ci-robot commented Sep 19, 2025

@anik120: This pull request references OPRUN-4133 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the task to target the "4.21.0" version, but no target version was set.

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@anik120 anik120 force-pushed the olmv1-single-own-namespace branch from ac0ba97 to 8d51ffd on October 3, 2025 02:40

@anik120 anik120 left a comment


@perdasilva addressed your feedback, PTAL

//
// inline must be set if configType is 'Inline'.
//
// +kubebuilder:validation:Type=object


It might be worth calling out that adding this annotation ensures that inputs that are valid JSON, but not key/value config, are rejected. E.g. something like just true is valid JSON (a boolean), but it wouldn't be valid configuration. Enforcing an object ensures that we have something that resembles a configuration/values file, e.g. key: value.

We should also think about empty object mechanics here, e.g. {}. Maybe that should be treated as no configuration and defaults used (cc @joelanford wdyt?).
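For illustration, a minimal sketch of the admission behavior under discussion, assuming the inline field shape used elsewhere in this EP (the argocd-pipelines value is illustrative):

# Rejected at admission: `true` is valid JSON, but not an object.
config:
  inline: true
---
# Accepted: a key/value object that resembles a values file.
config:
  inline:
    watchNamespace: argocd-pipelines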

@anik120 (Author)


Added a comment about the key/value config.

> We should also think about empty object mechanics here, e.g. {}. Maybe that should be treated as no configuration and defaults used

Or should we prevent empty objects in the first place?


That's my initial instinct. I just don't know if there's a use-case where you'd want to specifically set an empty configuration. I can't think of one, but that doesn't mean there isn't one XD


I've updated the doc strings upstream, so we may want to update this section a little for those changes (although I'm still waiting on an approval for that PR) ref

Or, just put a link to the upstream definition at main and say that this excerpt is as of the time of writing and that doc strings may vary (but the structure itself is this - unless new types are added)

@anik120 anik120 force-pushed the olmv1-single-own-namespace branch from 8d51ffd to a93ffa3 Compare October 13, 2025 16:05
- Re-introducing multi-tenancy features or supporting multiple installations of the same operator
- Supporting MultiNamespace install mode (watching multiple namespaces)
- Modifying the fundamental OLM v1 architecture or adding complex multi-tenancy logic
- Supporting install mode switching after initial installation as a first-class feature
Member


Since we are treating this as configuration, I think we would support install mode switching, right? Because, more generally, we're allowing configuration to be changed.

@anik120 (Author)


@perdasilva could you chime in here (I'm going off of Per's instruction to include this point here)


@perdasilva perdasilva Oct 14, 2025


I think there's a distinction that could be better worded here. Supporting install mode switching could mean a couple of things:

  1. you can change the value of/remove watchNamespace
  2. that change will work

I think 1 is in scope and 2 is not. It would be up to the author to make that happen, or to be able to recover from that switch. OLMv1 will just compute the correct resources and pivot towards them.
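For illustration, under reading (1) a switch is just a configuration edit that OLMv1 reconciles toward; a minimal sketch assuming the inline shape proposed in this EP (namespace names are made up):

config:
  inline:
    watchNamespace: team-a   # previously team-b, or omitted entirely for AllNamespaces;
                             # OLMv1 recomputes the manifests and pivots toward them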

## Proposal

Update the OLMv1 operator-controller to:
1. Validate bundle compatibility with the requested install mode during resolution

@joelanford joelanford Oct 14, 2025


Terminology is important. Correct me if I'm misinterpreting. Resolution is the process by which we choose a bundle to install/upgrade to. The process of consuming the configuration and applying it to the bundle in order to derive plain manifests happens after resolution. The resolver is not aware of the bundle's configuration schema and will not try to choose a bundle based on valid alignment of watchNamespace and install mode support.

Suggested change
1. Validate bundle compatibility with the requested install mode during resolution
1. Validate bundle compatibility with the requested install mode after resolution

@anik120 (Author)


I wasn't thinking about dependency resolution here. What I meant was "reconciliation". Updating the line with "during reconciliation", wdyt?

@anik120 anik120 force-pushed the olmv1-single-own-namespace branch from 68bd95b to 8462f1b on October 20, 2025 18:14
@anik120 anik120 force-pushed the olmv1-single-own-namespace branch from 8462f1b to 2f681d6 on October 21, 2025 13:56
@anik120 anik120 force-pushed the olmv1-single-own-namespace branch from 2f681d6 to 283340b on October 22, 2025 13:05

anik120 commented Oct 22, 2025

@perdasilva added a new commit to incorporate operator-framework/operator-controller#2283, PTAL.

@perdasilva

/lgtm

@openshift-ci openshift-ci bot added the lgtm label (Indicates that a PR is ready to be merged.) Oct 22, 2025
Comment on lines +244 to +247
config:
  inline:
    watchNamespace: argocd-pipelines
Contributor


While I get the appeal of a configuration option like this - and it could make sense when you have arbitrary configuration on a case-by-case basis (like if you used a Helm values file) - I'm not quite sure this particular case is best suited for an arbitrary configuration object field.

My biggest concern here is that for the primary use case you've called out here, you've made a configuration option that is entirely validated at runtime instead of at admission time. This gives a delayed feedback cycle and is generally a worse user experience.

This also seems more like an "installation" configuration as opposed to your future cases, which I see more like "operand" configurations. Put more simply, this reads as an "install the $thing in this way" vs. a "pass this configuration to $thing".

Have you considered providing a more explicit configuration field for this legacy install mode concept?

I suspect you'll end up having to carry this around for a while, but you may be able to influence people to use the more modern installation approach by literally naming the configuration field as such.

I recall there is an existing .spec.install field in the ClusterExtension API. As an example, what if you did something like:

spec:
  install:
    legacyConfig:
      mode: SingleNamespace
      singleNamespace:
        watchNamespace: something

and

spec:
  install:
    legacyConfig:
      mode: OwnNamespace
      # no discriminant because OwnNamespace requires installation namespace.

AllNamespaces, being the desired long-term default, can be the default behavior when legacyConfig is not specified.

This pattern allows you to be more explicit up front as to what is and is not a valid installation configuration for the chosen legacy installation mode.

You may still end up with runtime-side validations that need to happen during bundle resolution to determine whether or not there is a valid bundle that supports the chosen installation option, but you limit it to that instead of all your validations becoming runtime validations.

Contributor


Another thought to layer on top here, this type of configuration also allows you to make this legacy installation mode option something that is considered in bundle resolution.

Instead of selecting a bundle to install and then identifying that it doesn't support this installation mode you can include the installation mode in your resolution criteria, returning a resolution error if no bundles match those criteria.


Yeah, we're aware of the runtime validation considerations of this approach. It also wasn't our favorite. But, we'd like to treat these concerns as configuration and not really have the user thinking in terms of legacy/new, registry+v1, helm, registry+vx, etc. or have peculiarities of those different formats bleed through the interface. Once we have the new bundle format, it will likely look a lot more like the Helm case, anyway. We're also trying to be careful to avoid importing v0 verbiage into v1 - especially "install modes", which dovetails with v0's multi-tenancy feature (which we are trying to stay as far as possible from). Ultimately, from the product's perspective, configuration is defined by the bundle through an optional schema that is validated at runtime, and registry+v1 is just another bundle format.
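For a sense of what "an optional schema that is validated at runtime" could mean, here is a purely hypothetical bundle-supplied schema sketch - this EP does not define its shape, and the DNS1123 pattern is borrowed from the validation notes later in this thread:

# Hypothetical, illustrative only: a bundle-provided schema for its config.
type: object
properties:
  watchNamespace:
    type: string
    pattern: '^[a-z0-9]([-a-z0-9]*[a-z0-9])?$'   # DNS1123 label
additionalProperties: false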


@perdasilva perdasilva Oct 27, 2025


> This also seems more like an "installation" configuration as opposed to your future cases, which I see more like "operand" configurations.

We may have poorly expressed our intention here, but I don't think we'll ever have operand configuration. Bundle configuration will always be an installation concern. The configuration is used to mutate the set of manifests that compose the application in some author-defined way.

e.g. our nearest future case is a SubscriptionConfig-like configuration surface that can be used to modify the operator deployment manifest (add volumes, env vars, etc.)

Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

> We may have poorly expressed our intention here, but I don't think we'll ever have operand configuration. Bundle configuration will always be an installation concern. The configuration is used to mutate the set of manifests that compose the application in some author-defined way.

Sure. I think the "operand" vs "install" configuration is probably something I got hung up on and isn't actually applicable here. I can see the appeal of having a singular "configuration" field here for UX simplicity.

> Yeah, we're aware of the runtime validation considerations of this approach. It also wasn't our favorite. But, we'd like to treat these concerns as configuration and not really have the user thinking in terms of legacy/new, registry+v1, helm, registry+vx, etc. or have peculiarities of those different formats bleed through the interface. Once we have the new bundle format, it will likely look a lot more like the Helm case, anyway. We're also trying to be careful to avoid importing v0 verbiage into v1 - especially "install modes", which dovetails with v0's multi-tenancy feature (which we are trying to stay as far as possible from). Ultimately, from the product's perspective, configuration is defined by the bundle through an optional schema that is validated at runtime, and registry+v1 is just another bundle format

My confusion here is that you are saying that you want the configuration schema to be handled by the bundle, but OLM is the package manager. It defines the pattern in which things are packaged and as such you likely have control over how that configuration is handled.

If you know ahead of time the bundling formats you are going to support - and thus the general structure that those blobs will take - why not include those as explicit configuration formats that can be validated earlier in the loop where you can?

In this initial use case, you have a prime example of knowing installation scenarios where the watchNamespace is/is not required. You can enforce that behavior instead of having a swath of new runtime failure modes.

I still think something like:

spec:
  config:
    type: LegacySubscription
    legacySubscription:
      mode: SingleNamespace
      singleNamespace:
        watchNamespace: somenamespace

gives you the ability to add configuration formats in the future, lets you validate early when there is a configuration schema known ahead of time, and doesn't preclude a delayed validation flow later when you support a format that doesn't have a known schema ahead of time.

For example, if you later support a Helm values file the shape could evolve to look something like:

spec:
  config:
    type: HelmValues
    helmValues:
      - key: some
        value: thing

where you have arbitrary key-value pairs that you pass into your templating engine.

I also think that the structured configuration approach gives you the ability to return a more targeted error message during bundle resolution if the bundle doesn't support that configuration format. If a user has declared the intent to install a bundle using that configuration it seems odd to me that you may resolve to a bundle that doesn't declare support for that general configuration format and then attempt to configure it.


@everettraven everettraven Oct 28, 2025


> It is a core design decision of OLMv1 to ensure that watchNamespace is never a first class field in our API. See https://operator-framework.github.io/operator-controller/project/olmv1_design_decisions/#watched-namespaces-cannot-be-configured-in-a-first-class-api. Putting it in a structured field in the API would make it a first class field. Putting any reference to watchNamespace or installMode in our CRD is a non-starter.

Then you should not be supporting this functionality? If you want to support this functionality, I expect the configuration option to be made available in a supported way.

Discourage its use all you want; name it Legacy to make users explicitly aware through naming that they are using an old pattern. Continue limiting installations to one ClusterExtension per package. Making it a "first class API" doesn't mean you have to roll back your stance on multi-tenancy.

> The core tenet of this API design is that bundles will define their contents and configuration schemas.

Do bundles do this today, or is it currently the responsibility of OLM to do the translation?

> I think the main flaw of the argument for admission-time validation is that it assumes OLM and the user know what the format of the resolved bundle will be prior to resolution occurring, which happens during reconciliation of the ClusterExtension

Just to clarify - I am not fully against runtime validations. I accept that even if you implement admission-time validation there will have to be runtime validation as well.

My pushback here is specifically that having only an opaque configuration, when you don't know if you will support multiple formats, is a poor user experience, because a user has no way to know whether or not OLM is going to attempt to install/upgrade in a way that respects that configuration. I outlined two situations above:

  1. A user defines the configuration for the initial installation version and a future version of the package uses your latest and greatest bundling format. OLM attempts an automatic upgrade that fails. OLM had no knowledge of whether or not the existing configuration would have been compatible for the upgrade. OLM probably should not have even attempted this upgrade.
  2. Another upgrade scenario that shouldn't be possible (not certain whether or not it is with this proposal): A user defined the configuration, OLM does an automatic upgrade, the configuration is not the right structure and gets ignored. User-defined configuration is no longer respected and their workflows break

What are your thoughts on these situations?

> Even if we could build an admission webhook that performs resolution and schema validation at admission time, that wouldn't remove the need for runtime validation because:
>
>   1. The admission-time resolution might be different than the reconcile-time resolution, which means the admission time validation may be invalid by the time we reconcile and actually resolve.
>   2. The ClusterExtension can represent a range of versions that may not even exist yet. While it may be technically possible in the future to assess validity of the configuration based on the schemas of all of the bundles that match the filter criteria, it is impossible to assess configuration validity for yet unpublished bundles that will fall into the configured range in the future

I'm not saying you have to go and perform bundle resolution for admission validation, but if you are wanting to be declarative and avoid the scenarios I've outlined above a user defining their configuration format like:

apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: argocd
spec:
  namespace: argocd
  serviceAccount:
    name: argocd-installer
  config:
    type: LegacySubscription
    legacySubscription:
      mode: SingleNamespace
      singleNamespace:
        watchNamespace: argocd-watch
  source:
    sourceType: Catalog
    catalog:
      packageName: argocd-operator
      version: 0.6.0

would essentially mean "Install the argocd-operator package at version 0.6.0 that supports the legacy Subscription configuration format". If OLM doesn't resolve a bundle that matches that criteria, it fails resolution.

Any upgrades would be limited to the same resolution criteria so you never encounter a situation where a previous configuration is no longer respected.

> We know from the experience of OLMv0 that restrictive and opinionated configuration schemas defined by the OLM team ultimately cause problems when customers ask extension authors to deliver a new feature related to how the operator itself is configured

No one is asking you to define a restrictive and opinionated configuration schema for OLMv1. If you later support another bundle format you also pick up the configuration schema there. If that new bundle format doesn't have an opinionated schema, neither do you. The configuration schema for the registry+v1 bundle format you support is restrictive and opinionated, so you do have an opinionated configuration stance for that bundle format.

You are probably going to be bound to this until you support a new bundle format.

Contributor


I do think that pushing the responsibility of configuration, templating, etc. to the bundle is a reasonable direction forward, but I think you need an answer for what that actually looks like before designing an API around it and how OLM is going to behave in that new world.

To me, it doesn't seem reasonable to have a singular opaque configuration field in a world where you support multiple distinct bundle formats for packaging.

In a world where you support a single bundle format where the configuration process is entirely offloaded to the bundle, I think it is reasonable.

Contributor


> reject the idea of a discriminated union for different bundle formats because of the major friction point that would add for users to move off of our legacy bundle format and onto Helm

I'm curious, what is the major friction point in switching the configuration in the ClusterExtension API?

Contributor


> I'm gonna jump off this thread. I don't think this back and forth is solving anything. Let's go the synchronous route. From our perspective, the earlier the better. But, if the best time for you is next Tuesday at the office hours, let's just do that...

Seeing as it is currently Shift Week, next Tuesday would be preferable.

Contributor


Had a chat with both @joelanford and @perdasilva to hash this out synchronously.

With a better understanding of the direction you folks are heading and having addressed my concerns about certain scenarios - if you folks feel as though this opaque configuration is what provides the best experience for your users and are ready to support it along with any challenges that come along in the future I won't stop it.

Please make sure that you update the EP as discussed to include:

- An explicit statement about the future you are trying to achieve and how this is the best path to achieving that
- How to ensure customers encountering issues here are getting the right support (i.e. who is responsible for which failure modes)
- How users can prevent and/or remediate bad states caused by either:
  - a misconfiguration
  - a scenario where an end user had a valid configuration and an automatic upgrade happens where the configuration is now invalid and blocked
  - a scenario where a user had a valid configuration and an automatic upgrade happens where the previous configuration is no longer respected (i.e. their workflows are now broken because their intended configuration is no longer in place)

- **Configuration Migration**: No automatic migration; users must explicitly install using OLMv1 ClusterExtension and configure `watchNamespace`
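
For illustration, a migrated installation under this proposal would look roughly like the following, reusing the argocd names from examples elsewhere in this thread (all names and the version are illustrative):

apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: argocd
spec:
  namespace: argocd
  serviceAccount:
    name: argocd-installer
  config:
    inline:
      watchNamespace: argocd-pipelines   # single-namespace watching; omit for AllNamespaces
  source:
    sourceType: Catalog
    catalog:
      packageName: argocd-operator
      version: 0.6.0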

### Downgrade Strategy
- **Feature Gate Disable**: Disabling the feature gate prevents new Single/OwnNamespace installations
Contributor


Customers do not have direct control of OpenShift features at a granular level. They can choose to flip the cluster to an unsupported mode that enables the features at that maturity level.

The goal of this section is to understand how, if for some reason, a customer would need to go about downgrading to a version where your feature is no longer enabled by default.

In my experience, for features that involve new opt-in API fields this ends up being something like:

- unset new fields
- downgrade

In your case, I'd include some considerations for how OLM will handle a ClusterExtension using a field it doesn't know about any more on a downgrade and what things a customer doing a downgrade may need to consider before going through with it.


I think it's as you describe. Ultimately, if a customer upgrades, makes use of the new field, then decides to downgrade, they will need to unset the new fields.

Going forward, as the registry+v1 configuration surface increases, it would be much the same. After a downgrade you'd get some runtime config schema violations like "unknown field ..." if you don't.
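
Concretely, the pre-downgrade step described above amounts to deleting the new stanza from the spec before downgrading; a sketch using this proposal's field shape (names illustrative):

spec:
  namespace: argocd
  config:                                # remove this whole block before downgrading;
    inline:                              # leaving it in place yields runtime schema
      watchNamespace: argocd-pipelines   # violations like 'unknown field ...'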

@anik120 (Author)


@everettraven @perdasilva rewrote this section PTAL.

Looked at the CRD, and it looks like there's

inline:
    type: object
    x-kubernetes-preserve-unknown-fields: true

for the .spec.config field.

**Installation Success Rate:**

* **RBAC Validation Complexity:** Namespace-scoped installations require more complex RBAC validation to ensure the ServiceAccount has appropriate permissions for the target namespace. RBAC misconfigurations that work in AllNamespaces mode may fail in Single/OwnNamespace modes.
* Example: ServiceAccount has cluster-wide read permissions but lacks namespace-specific write permissions, causing installation to fail.
Contributor


Is this the OLM service account or operand service account you are referring to?

If it is the OLM service account this sounds true regardless of namespace-scoped vs cluster-scoped installations because any extension in this context inherently requires stamping out some kind of namespaced resources.

@anik120 (Author)


This is the operand's service account.
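
For illustration, the failure case described above corresponds to the operand's ServiceAccount missing a namespace-scoped grant like the following in the watch namespace - a sketch, with all names and the API group assumed from the argocd example:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argocd-operator
  namespace: argocd-pipelines            # the watch namespace, not the install namespace
rules:
  - apiGroups: ["argoproj.io"]
    resources: ["argocds"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argocd-operator
  namespace: argocd-pipelines
subjects:
  - kind: ServiceAccount
    name: argocd-operator                # the operand's ServiceAccount
    namespace: argocd                    # lives in the install namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: argocd-operator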

Comment on lines +493 to +504
**Configuration Issues:**
- Invalid watchNamespace specification (DNS1123 validation failures)
- Target namespace doesn't exist or isn't accessible
- ServiceAccount lacks sufficient permissions for namespace access
- Bundle configuration does not include `watchNamespace`

**Runtime Issues:**
- Operator deployed in install namespace but cannot access watch namespace
- RBAC resources incorrectly scoped for actual operator requirements
- Network policies preventing cross-namespace access when needed
Contributor


How are end-users going to be made aware of these issues?


@anik120 anik120 Oct 27, 2025


Correct me if I'm wrong @perdasilva, but the configuration issues are surfaced in the ClusterExtension's status section.

For runtime errors, those are an Extension concern. OLMv1 will install the ClusterExtension as long as there are no configuration issues. Extensions have to surface any runtime issues, e.g. a namespace being unreachable due to restrictive NetworkPolicy rules, etc.


That's correct
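
For illustration, a configuration issue surfacing in status might look something like the following - the condition type, reason, and message wording here are guesses for the sketch, not confirmed controller output:

status:
  conditions:
    - type: Progressing
      status: "True"
      reason: Retrying
      message: 'invalid watchNamespace "Bad_Name": must be a valid DNS1123 label'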

- Modify watchNamespace configuration to change install mode
- Scale down operator-controller to manually intervene if needed

## Version Skew Strategy
Contributor


It sounds like this is isolated to a singular component managed by a singular cluster operator (cluster-olm-operator) and isn't prone to a state where you may need to deal with one component of the system being on a newer version than another component of the system?

@anik120 (Author)


Added a line saying we don't need a strategy because of the reason you mentioned, thanks for the pointer.


### Regression Tests
- **Conversion Compatibility**: Ensure generated manifests match OLM v0 output for equivalent configurations
- **Feature Gate Toggle**: Verify behavior when feature gate is disabled
Contributor


These are just the existing tests, no?

@anik120 (Author)


That's a good point. Also I just realized that we haven't actually talked about any regression tests yet, so I'm going to leave this section blank instead.

**Description**: Implement full OLM v0 install mode compatibility including MultiNamespace.

**Why Not Selected**:
- Would reintroduce the multi-tenancy complexity that OLM v1 explicitly avoided
Contributor


How so? Is this not just configuring the singular instance of an operator to watch more than a singular namespace but less than the whole cluster?

You don't have to allow more than a single installation just because you support configuring an installation with the ability to watch N namespaces.


I think the issue is it starts to give people the impression we're going, or could go, down that path. Even this Single/OwnNamespace support was already a bit of a stretch for us, and we only did it because it was demanded by the business - and I hope it will go away in time =S

Contributor


You already have a hard stance you will not go down that way?

How much of the possible extensions one might install on the cluster support this installation mode? How many only support this installation mode?


@perdasilva perdasilva Oct 27, 2025


> You already have a hard stance you will not go down that way?

That's certainly my understanding. This is not something we want to support in v1. We want to distance ourselves from these concerns as fast as possible and give them to the author.

So, maybe we just change the reasoning to reflect that. "OLMv1 does not want to support this use-case as it sees it as a configuration concern defined by the package author"

@everettraven (Contributor)

Also looks to be some linting issues that need addressing.

@anik120 anik120 force-pushed the olmv1-single-own-namespace branch from cf1d46b to 9d8479b on October 27, 2025 19:14
@openshift-ci openshift-ci bot removed the lgtm label (Indicates that a PR is ready to be merged.) Oct 27, 2025

openshift-ci bot commented Oct 27, 2025

New changes are detected. LGTM label has been removed.

@anik120 anik120 force-pushed the olmv1-single-own-namespace branch 5 times, most recently from 60d6fd0 to e7e62c3 on October 27, 2025 20:56
@anik120 anik120 force-pushed the olmv1-single-own-namespace branch from e7e62c3 to 3c1e77e on October 27, 2025 21:01

openshift-ci bot commented Oct 27, 2025

@anik120: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
