Conversation

blaketastic2 (Contributor) commented:

What this PR does / why we need it:

This PR adds support for setting a nodeSelector on the services managed by the operator, giving finer control over which nodes the service pods are scheduled on.

Which issue(s) this PR fixes:

None

Misc

None

Signed-off-by: Blake <blaketastic2@gmail.com>

Copilot AI left a comment:

Pull Request Overview

This PR adds support for configuring nodeSelector on Feast service deployments within the Kubernetes operator, enabling users to control pod placement on specific nodes.

Key Changes:

  • Added a nodeSelector field to the OptionalCtrConfigs struct in the API (see the sketch after this list)
  • Implemented logic to apply node selectors to pod specifications
  • Added comprehensive test coverage for node selector behavior
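
As a rough sketch, the new API field might look like the following. This assumes the existing OptionalCtrConfigs struct in api/v1alpha1/featurestore_types.go; the doc comment and the kubebuilder/json markers follow common operator conventions and are illustrative, not copied from the PR.

```go
// Sketch only: the struct's existing fields are elided, and the markers
// below are assumptions based on typical operator API conventions.
type OptionalCtrConfigs struct {
	// ... existing optional container configuration fields ...

	// NodeSelector constrains scheduling of the service's pods to nodes
	// whose labels match every entry in this map, following standard
	// Kubernetes PodSpec nodeSelector semantics.
	// +optional
	NodeSelector map[string]string `json:"nodeSelector,omitempty"`
}
```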

Reviewed Changes

Copilot reviewed 6 out of 7 changed files in this pull request and generated 2 comments.

Summary per file:

  • api/v1alpha1/featurestore_types.go: added the NodeSelector field to the OptionalCtrConfigs struct
  • api/v1alpha1/zz_generated.deepcopy.go: generated deep-copy logic for the new NodeSelector field
  • config/crd/bases/feast.dev_featurestores.yaml: updated the CRD schema to include the nodeSelector field definitions
  • docs/api/markdown/ref.md: updated the API documentation to reflect the new nodeSelector field
  • internal/controller/services/services.go: implemented node selector retrieval and application logic (see the sketch after this list)
  • internal/controller/services/services_test.go: added test cases covering node selector behavior
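
The PR's actual services.go changes are not reproduced here, but a minimal sketch of how a configured selector could be copied onto a generated pod spec might look like this; the helper name applyNodeSelector is hypothetical:

```go
package services

import (
	corev1 "k8s.io/api/core/v1"
)

// applyNodeSelector is a hypothetical helper: when a node selector has been
// configured, copy it onto the pod spec so the scheduler only places the
// pod on nodes whose labels match every entry in the map.
func applyNodeSelector(podSpec *corev1.PodSpec, nodeSelector map[string]string) {
	if nodeSelector != nil {
		podSpec.NodeSelector = nodeSelector
	}
}
```

A test along the lines of those added in services_test.go could then configure a selector such as disktype: ssd on a FeatureStore and assert that the rendered pod template carries the same map.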

