Prow job labeled with prow version #21054

Merged · 2 commits · Mar 1, 2021
3 changes: 3 additions & 0 deletions prow/kube/prowjob.go
@@ -41,6 +41,9 @@ const (
// job names can be arbitrarily long, this is added as
// an annotation instead of a label.
ProwJobAnnotation = "prow.k8s.io/job"
// ProwVersionLabel is added in resources created by prow and
// carries the version of prow that decorated this job.
ProwVersionLabel = "prow.k8s.io/version"
Member:
We deploy all prow components as a bunch of microservices in lockstep today, but it may be worth considering the versions of the individual components themselves.

In terms of components, the relevant ones are those that explain why pods landed on a cluster and how they ended up in their current form. Ideally we could have:

  • prow.k8s.io/hook-version, since hook is responsible for triggering a job in response to GitHub events
  • prow.k8s.io/plank-version, since plank is responsible for making a pod out of a prowjob
  • some indication of which config prow was using when it created this pod (the sha of the repo the config was deployed from? a hash of the configmap contents? the revision of the configmap resource?), or maybe the specific component and job configs
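As a minimal sketch, the suggested per-component keys could sit next to the existing label constants; the names below are the reviewer's proposal rather than anything this PR adds:

// Hypothetical per-component version labels (not part of this PR).
const (
	// HookVersionLabel would carry the version of hook that created the
	// ProwJob in response to a GitHub event.
	HookVersionLabel = "prow.k8s.io/hook-version"
	// PlankVersionLabel would carry the version of plank that turned the
	// ProwJob into a pod.
	PlankVersionLabel = "prow.k8s.io/plank-version"
)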

Member:
A better approach to this would be to leverage managedFields. We would need to figure out how to set a correct field manager (xref kubernetes-sigs/controller-runtime#1215); then something like kubectl-blame can be used to figure out which component, at which version, set which fields. Requires kube 1.18 though.
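For illustration only, a sketch of what setting an explicit field manager could look like with a controller-runtime client; the function and wiring here are assumptions, not something this PR proposes:

// Hypothetical: create the pod with an explicit field manager so that
// managedFields records which component wrote which fields.
import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func createPodWithFieldManager(ctx context.Context, c client.Client, pod *corev1.Pod) error {
	// client.FieldOwner sets the field manager name stored in managedFields.
	return c.Create(ctx, pod, client.FieldOwner("plank"))
}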

Member:
Although, hm, the field manager is intended as a user identifier in server-side apply, which means that if we put a version inside it, we would block ourselves from ever adopting server-side apply, should we want to.

Member (@spiffxp, Feb 27, 2021):
The nice thing about labels, in my experience, is that they automatically get attached to metrics within Google Cloud Monitoring, and I'm pretty sure kubernetes/kube-state-metrics does the same.

The more I think about it, prow.k8s.io/plank-version makes sense to start with, as plank is the thing creating the pod. Some time later I could see prow.k8s.io/hook-version getting attached to ProwJobs created by hook and propagating through (since it's entirely possible for ProwJobs to be created by something other than hook).

Contributor (author):
Good point and done

// OrgLabel is added in resources created by prow and
// carries the org associated with the job, eg kubernetes-sigs.
OrgLabel = "prow.k8s.io/refs.org"
2 changes: 2 additions & 0 deletions prow/plank/reconciler.go
@@ -604,6 +604,8 @@ func (r *reconciler) startPod(ctx context.Context, pj *prowv1.ProwJob) (string,
return "", "", err
}
pod.Namespace = r.config().PodNamespace
// Add prow version as a label for better debugging prowjobs.
pod.ObjectMeta.Labels[kube.ProwVersionLabel] = version.Version

client, ok := r.buildClients[pj.ClusterAlias()]
if !ok {
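As a usage note, once pods carry the new label they can be selected by prow version, for example kubectl get pods -l prow.k8s.io/version=<version>. Below is a small Go sketch of the same query; the helper name and the controller-runtime client wiring are assumptions for illustration, not part of this PR:

// Hypothetical helper: list pods created by a specific prow version via the
// new prow.k8s.io/version label. Assumes a controller-runtime client.
import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"

	"k8s.io/test-infra/prow/kube"
)

func podsForProwVersion(ctx context.Context, c client.Client, prowVersion string) (*corev1.PodList, error) {
	pods := &corev1.PodList{}
	// MatchingLabels builds a label selector equivalent to
	// prow.k8s.io/version=<prowVersion>.
	err := c.List(ctx, pods, client.MatchingLabels{kube.ProwVersionLabel: prowVersion})
	return pods, err
}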