diff --git a/keps/sig-apps/4443-configurable-pod-failure-policy-reasons/README.md b/keps/sig-apps/4443-configurable-pod-failure-policy-reasons/README.md new file mode 100644 index 000000000000..8cde8fd083e1 --- /dev/null +++ b/keps/sig-apps/4443-configurable-pod-failure-policy-reasons/README.md @@ -0,0 +1,927 @@ + +# KEP-4443: Configurable PodFailurePolicy Reason + + + + + + +- [Release Signoff Checklist](#release-signoff-checklist) +- [Summary](#summary) +- [Motivation](#motivation) + - [Goals](#goals) + - [Non-Goals](#non-goals) +- [Proposal](#proposal) + - [User Stories (Optional)](#user-stories-optional) + - [Story 1](#story-1) + - [Story 2](#story-2) + - [Notes/Constraints/Caveats (Optional)](#notesconstraintscaveats-optional) + - [Risks and Mitigations](#risks-and-mitigations) +- [Design Details](#design-details) + - [Test Plan](#test-plan) + - [Prerequisite testing updates](#prerequisite-testing-updates) + - [Unit tests](#unit-tests) + - [Integration tests](#integration-tests) + - [e2e tests](#e2e-tests) + - [Graduation Criteria](#graduation-criteria) + - [Upgrade / Downgrade Strategy](#upgrade--downgrade-strategy) + - [Version Skew Strategy](#version-skew-strategy) +- [Production Readiness Review Questionnaire](#production-readiness-review-questionnaire) + - [Feature Enablement and Rollback](#feature-enablement-and-rollback) + - [Rollout, Upgrade and Rollback Planning](#rollout-upgrade-and-rollback-planning) + - [Monitoring Requirements](#monitoring-requirements) + - [Dependencies](#dependencies) + - [Scalability](#scalability) + - [Troubleshooting](#troubleshooting) +- [Implementation History](#implementation-history) +- [Drawbacks](#drawbacks) +- [Alternatives](#alternatives) +- [Infrastructure Needed (Optional)](#infrastructure-needed-optional) + + +## Release Signoff Checklist + + + +Items marked with (R) are required *prior to targeting to a milestone / release*. 
+
+- [ ] (R) Enhancement issue in release milestone, which links to KEP dir in [kubernetes/enhancements] (not the initial KEP PR)
+- [ ] (R) KEP approvers have approved the KEP status as `implementable`
+- [ ] (R) Design details are appropriately documented
+- [ ] (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input (including test refactors)
+  - [ ] e2e Tests for all Beta API Operations (endpoints)
+  - [ ] (R) Ensure GA e2e tests meet requirements for [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
+  - [ ] (R) Minimum Two Week Window for GA e2e tests to prove flake free
+- [ ] (R) Graduation criteria is in place
+  - [ ] (R) [all GA Endpoints](https://github.com/kubernetes/community/pull/1806) must be hit by [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
+- [ ] (R) Production readiness review completed
+- [ ] (R) Production readiness review approved
+- [ ] "Implementation History" section is up-to-date for milestone
+- [ ] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io]
+- [ ] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes
+
+[kubernetes.io]: https://kubernetes.io/
+[kubernetes/enhancements]: https://git.k8s.io/enhancements
+[kubernetes/kubernetes]: https://git.k8s.io/kubernetes
+[kubernetes/website]: https://git.k8s.io/website
+
+## Summary
+
+This KEP proposes to extend the Job API by adding an optional `Reason` field to `PodFailurePolicyRule`, which, if specified, would be included as the reason in the `JobFailed` condition upon Job failure triggered by a `PodFailurePolicy`.
+
+## Motivation
+
+Higher level APIs, such as [JobSet](https://sigs.k8s.io/jobset), use the Job API as a building block for orchestrating large, distributed workloads.
+These higher level APIs need to be able to distinguish between different types of Job failures in order to make informed decisions about how to react
+to them. Currently, no mechanism exists in the Job API to propagate granular failure information (e.g., container exit codes) in a form that can be
+programmatically consumed by higher level software managing Jobs. A `PodFailurePolicy` can be configured to add a `Reason` of `PodFailurePolicy` to
+the `JobFailed` condition added to the Job when it fails, but different pod failure policy rules targeting different container exit codes all use the
+same `Reason` of `PodFailurePolicy`. This prevents higher level APIs like JobSet from distinguishing between them and taking different actions
+depending on the type of Job failure that occurred.
+
+For a concrete use case, see the JobSet [Configurable Failure Policy KEP](https://github.com/kubernetes-sigs/jobset/pull/381), which illuminated the need for more granular pod failure policy reasons.
+
+### Goals
+
+Enable pod failure policies to communicate different failure types to higher level APIs.
+
+### Non-Goals
+
+## Proposal
+
+The proposal is to add an optional `Reason` field to `PodFailurePolicyRule`.
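+
+For illustration, here is a minimal sketch of the proposed API change. The existing fields are
+paraphrased from the current `batch/v1` types, and the exact Go type, doc comments, and validation
+for the new field shown here are assumptions to be settled during API review:
+
+```go
+// PodFailurePolicyRule describes how a pod failure is handled when the requirements are met.
+type PodFailurePolicyRule struct {
+	// Specifies the action taken on a pod failure when the requirements are satisfied.
+	Action PodFailurePolicyAction `json:"action"`
+
+	// Existing match requirements (abbreviated here).
+	OnExitCodes     *PodFailurePolicyOnExitCodesRequirement    `json:"onExitCodes,omitempty"`
+	OnPodConditions []PodFailurePolicyOnPodConditionsPattern   `json:"onPodConditions,omitempty"`
+
+	// Reason is the proposed new field. When Action is FailJob and this rule triggers
+	// the Job failure, this value is used as the Reason of the JobFailed condition.
+	// If unset, the current default reason "PodFailurePolicy" is used.
+	// +optional
+	Reason *string `json:"reason,omitempty"`
+}
+```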
+
+If unset, the new field will default to `PodFailurePolicy`, which is the current [reason](https://sourcegraph.com/github.com/kubernetes/kubernetes@6a4e93e776a35d14a61244185c848c3b5832621c/-/blob/staging/src/k8s.io/api/batch/v1/types.go?L542) the Job
+controller uses when a pod failure policy triggers a Job failure.
+
+When a `PodFailurePolicyRule` matches a pod failure and the `Action` is `FailJob`, the Job
+controller will add the reason defined in the `Reason` field to the `JobFailed` [condition](https://sourcegraph.com/github.com/kubernetes/kubernetes@6a4e93e776a35d14a61244185c848c3b5832621c/-/blob/pkg/controller/job/job_controller.go?L816) added
+to the Job.
+
+### User Stories (Optional)
+
+#### Story 1
+
+As a user, I am using a JobSet to manage a group of jobs, each running an HPC simulation.
+Each job runs a simulation with different random initial parameters. When a simulation ends, the
+application will exit with one of two exit codes:
+
+- Exit code 2, which indicates the simulation produced an invalid result due to bad starting parameters, and should
+not be retried.
+- Exit code 3, which indicates the simulation produced an invalid result but the initial parameters were reasonable,
+so the simulation should be restarted.
+
+When a Job fails due to a pod failing with exit code 2, I want my job management software to leave the Job in
+a failed state.
+
+When a Job fails due to a pod failing with exit code 3, I want my job management software to restart the Job.
+
+**Example JobSet with a Pod Failure Policy configuration for this use case**:
+```yaml
+apiVersion: jobset.x-k8s.io/v1alpha2
+kind: JobSet
+metadata:
+  name: restart-job-example
+  annotations:
+    alpha.jobset.sigs.k8s.io/exclusive-topology: {{topologyDomain}} # 1:1 job replica to topology domain assignment
+spec:
+  failurePolicy:
+    rules:
+    # If a Job fails due to a pod failing with exit code 2, leave it in a failed state.
+    - action: FailJob
+      targetReplicatedJobs:
+      - simulations
+      onJobFailureReasons:
+      - ExitCode2
+    # If a Job fails due to a pod failing with exit code 3, restart that Job.
+    - action: RestartJob
+      targetReplicatedJobs:
+      - simulations
+      onJobFailureReasons:
+      - ExitCode3
+    maxRestarts: 10
+  replicatedJobs:
+  - name: simulations
+    replicas: 10
+    template:
+      spec:
+        parallelism: 1
+        completions: 1
+        backoffLimit: 0
+        # If a pod fails with exit code 2 or 3, fail the Job, using the user-defined reason.
+        podFailurePolicy:
+          rules:
+          - action: FailJob
+            onExitCodes:
+              containerName: main
+              operator: In
+              values: [2]
+            reason: "ExitCode2"
+          - action: FailJob
+            onExitCodes:
+              containerName: main
+              operator: In
+              values: [3]
+            reason: "ExitCode3"
+        template:
+          spec:
+            restartPolicy: Never
+            containers:
+            - name: main
+              image: python:3.10
+              command: ["..."]
+```
+
+**Alternative example: single Job with a Pod Failure Policy configuration for this use case**:
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: simulation
+spec:
+  parallelism: 1
+  completions: 1
+  backoffLimit: 0
+  # If a pod fails with exit code 2 or 3, fail the Job, using the user-defined reason.
+  podFailurePolicy:
+    rules:
+    - action: FailJob
+      onExitCodes:
+        containerName: main
+        operator: In
+        values: [2]
+      reason: "ExitCode2"
+    - action: FailJob
+      onExitCodes:
+        containerName: main
+        operator: In
+        values: [3]
+      reason: "ExitCode3"
+  template:
+    spec:
+      restartPolicy: Never
+      containers:
+      - name: main
+        image: python:3.10
+        command: ["..."]
+```
+
+### Notes/Constraints/Caveats (Optional)
+
+### Risks and Mitigations
+
+We do not foresee any notable risks for this feature.
+
+## Design Details
+
+The new optional `Reason` field will be added to `PodFailurePolicyRule`.
+If unset, it will default to `PodFailurePolicy`, which is the current [reason](https://sourcegraph.com/github.com/kubernetes/kubernetes@6a4e93e776a35d14a61244185c848c3b5832621c/-/blob/staging/src/k8s.io/api/batch/v1/types.go?L542) the Job
+controller uses when a pod failure policy triggers a Job failure.
+
+When a `PodFailurePolicyRule` matches a pod failure and the `Action` is `FailJob`, the Job
+controller will add the reason defined in the `Reason` field to the `JobFailed` [condition](https://sourcegraph.com/github.com/kubernetes/kubernetes@6a4e93e776a35d14a61244185c848c3b5832621c/-/blob/pkg/controller/job/job_controller.go?L816) added
+to the Job.
+
+### Test Plan
+
+This feature can be tested via unit and integration tests; we don't need e2e tests for this.
+
+[X] I/we understand the owners of the involved components may require updates to
+existing tests to make this code solid enough prior to committing the changes necessary
+to implement this enhancement.
+
+##### Prerequisite testing updates
+
+##### Unit tests
+
+- `k8s.io/kubernetes/pkg/controller/job`: `02/05/2024` - ``
+
+##### Integration tests
+
+- When the feature flag is enabled and a Job's pod failure policy triggers a Job failure via a
+matching `PodFailurePolicyRule` with the `Reason` field defined, check that the `JobFailed`
+condition has the user-specified `Reason` set on it correctly.
+
+##### e2e tests
+
+### Graduation Criteria
+
+#### Alpha
+
+- Feature implemented behind a feature flag
+- Initial unit and integration tests are implemented
+
+### Upgrade / Downgrade Strategy
+
+After a user upgrades their cluster to a Kubernetes version which supports this feature,
+they can use it by simply specifying the new field in their podFailurePolicy config.
+
+When a user downgrades from a Kubernetes version that supports this field to one that does
+not support this field:
+- for existing Jobs, this new field will be ignored by the Job controller,
+resulting in the `Reason` being set to the previous default of `PodFailurePolicy`
+for any Job failures triggered by a pod failure policy.
+- for new Jobs, the kube-apiserver will remove this field when the Job is submitted.
+
+### Version Skew Strategy
+
+N/A. This feature doesn't require coordination between control plane components;
+the changes to the Job controller are self-contained.
+
+## Production Readiness Review Questionnaire
+
+### Feature Enablement and Rollback
+
+- Upgrade to k8s version 1.30+
+- Enable the feature flag `PodFailurePolicyReason`
+
+###### How can this feature be enabled / disabled in a live cluster?
+
+- [X] Feature gate (also fill in values in `kep.yaml`)
+  - Feature gate name: `PodFailurePolicyReason`
+  - Components depending on the feature gate:
+    - kube-apiserver
+    - kube-controller-manager
+- [ ] Other
+  - Describe the mechanism:
+  - Will enabling / disabling the feature require downtime of the control
+    plane?
+  - Will enabling / disabling the feature require downtime or reprovisioning
+    of a node?
+ +###### Does enabling the feature change any default behavior? + + +No + +###### Can the feature be disabled once it has been enabled (i.e. can we roll back the enablement)? + + +Yes, by disabling the feature flag `PodFailurePolicyReason`. + +###### What happens if we reenable the feature if it was previously rolled back? + +For new Jobs, the apiserver will stop wiping out the new field. +For existing Jobs, the Job controller will stop ignoring the new field, and begin +using it as described in previous sections. + +###### Are there any tests for feature enablement/disablement? + + +We can add unit tests for: +- feature enabled and field set +- feature disabled and field set + +### Rollout, Upgrade and Rollback Planning + + + +###### How can a rollout or rollback fail? Can it impact already running workloads? + + + +###### What specific metrics should inform a rollback? + + + +###### Were upgrade and rollback tested? Was the upgrade->downgrade->upgrade path tested? + + + +###### Is the rollout accompanied by any deprecations and/or removals of features, APIs, fields of API types, flags, etc.? + + + +### Monitoring Requirements + + + +###### How can an operator determine if the feature is in use by workloads? + + + +###### How can someone using this feature know that it is working for their instance? + + + +- [ ] Events + - Event Reason: +- [ ] API .status + - Condition name: + - Other field: +- [ ] Other (treat as last resort) + - Details: + +###### What are the reasonable SLOs (Service Level Objectives) for the enhancement? + + + +###### What are the SLIs (Service Level Indicators) an operator can use to determine the health of the service? + + + +- [ ] Metrics + - Metric name: + - [Optional] Aggregation method: + - Components exposing the metric: +- [ ] Other (treat as last resort) + - Details: + +###### Are there any missing metrics that would be useful to have to improve observability of this feature? + + + +### Dependencies + + + +###### Does this feature depend on any specific services running in the cluster? + + + +### Scalability + + + +###### Will enabling / using this feature result in any new API calls? + + +No + +###### Will enabling / using this feature result in introducing new API types? + + +No, just a new string field on an existing API type. + +###### Will enabling / using this feature result in any new calls to the cloud provider? + + +No + +###### Will enabling / using this feature result in increasing size or count of the existing API objects? + + +If the optional `Reason` field is specified, the podFailurePolicy object size will increase by 1 byte per +character in the `Reason` string. + +###### Will enabling / using this feature result in increasing time taken by any operations covered by existing SLIs/SLOs? + + +No + +###### Will enabling / using this feature result in non-negligible increase of resource usage (CPU, RAM, disk, IO, ...) in any components? + + +No + +###### Can enabling / using this feature result in resource exhaustion of some node resources (PIDs, sockets, inodes, etc.)? + + +No + +### Troubleshooting + + + +###### How does this feature react if the API server and/or etcd is unavailable? + +###### What are other known failure modes? + + + +###### What steps should be taken if SLOs are not being met to determine the problem? 
+
+## Implementation History
+
+- KEP Published: 02/05/2024
+
+## Drawbacks
+
+None
+
+## Alternatives
+
+Rather than having the user specify a custom `Reason`, which requires validation and an additional step
+in the user experience, the Job controller could instead automatically generate reasons, using a deterministic
+format which depends on whether the pod failure policy was triggered by `onPodConditions` or `onExitCodes`.
+
+For example, in the case of container exit codes, we could append the container exit code to
+the default reason (e.g., `PodFailurePolicy-ExitCode3`). For `onPodConditions`, it could be the pod condition
+type and status joined together (e.g., `PodFailurePolicy-{type}-{status}`). However, this limits flexibility
+for the user, and locks us into supporting this concrete, specific behavior, rather than the more generic
+behavior resulting from the optional `Reason` field set by the user.
+
+## Infrastructure Needed (Optional)
+
diff --git a/keps/sig-apps/4443-configurable-pod-failure-policy-reasons/kep.yaml b/keps/sig-apps/4443-configurable-pod-failure-policy-reasons/kep.yaml
new file mode 100644
index 000000000000..6f2335ed06cf
--- /dev/null
+++ b/keps/sig-apps/4443-configurable-pod-failure-policy-reasons/kep.yaml
@@ -0,0 +1,41 @@
+title: Configurable PodFailurePolicy Reason
+kep-number: 4443
+authors:
+  - "@danielvegamyhre"
+owning-sig: sig-apps
+status: provisional
+creation-date: 2024-01-26
+reviewers:
+  - "@ahg-g"
+  - "@kannon92"
+approvers:
+  - "@alculquicondor"
+  - "@msau42"
+
+see-also:
+  - "https://github.com/kubernetes-sigs/jobset/pull/381"
+
+# The target maturity stage in the current dev cycle for this KEP.
+stage: alpha
+
+# The most recent milestone for which work toward delivery of this KEP has been
+# done. This can be the current (upcoming) milestone, if it is being actively
+# worked on.
+latest-milestone: "v1.30"
+
+# The milestone at which this feature was, or is targeted to be, at each stage.
+milestone:
+  alpha: "v1.30"
+
+# The following PRR answers are required at alpha release
+# List the feature gate name and the components for which it must be enabled
+feature-gates:
+  - name: PodFailurePolicyReason
+    components:
+      - kube-apiserver
+      - kube-controller-manager
+disable-supported: true
+
+# # The following PRR answers are required at beta release
+# metrics:
+#   - my_feature_metric