
Invalid owners files in sigs.yaml #4125

Closed
geekygirldawn opened this issue Sep 30, 2019 · 26 comments
Assignees
Labels
  • area/community-management
  • committee/steering: Denotes an issue or PR intended to be handled by the steering committee.
  • help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • sig/apps: Categorizes an issue or PR as relevant to SIG Apps.
  • sig/autoscaling: Categorizes an issue or PR as relevant to SIG Autoscaling.
  • sig/contributor-experience: Categorizes an issue or PR as relevant to SIG Contributor Experience.
  • sig/storage: Categorizes an issue or PR as relevant to SIG Storage.
Milestone

Comments

@geekygirldawn
Contributor

The following owners files in sigs.yaml are generating 404 errors:

sig-apps
https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/api/core/v1/OWNERS

sig-auth
https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/controller/certificates/approver/OWNERS
https://raw.githubusercontent.com/kubernetes/kubernetes/master/plugin/pkg/admission/imagepolicy/OWNERS

sig-autoscaling
https://raw.githubusercontent.com/kubernetes/client-go/master/scale/OWNERS

sig-storage
https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/csi-api/OWNERS

sig-api-machinery (pull request fix pending #4124)
https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/quota/OWNERS

I wasn't positive which owners files should replace most of these, so if someone could track them down and update sigs.yaml, that would be great. For the sig-api-machinery quota file, I think I found the correct owners file and created pull request #4124 to update just that one owners file in sigs.yaml.

@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Sep 30, 2019
@geekygirldawn
Contributor Author

geekygirldawn commented Sep 30, 2019

/sig apps
/sig storage
/sig autoscaling
/sig auth

@k8s-ci-robot k8s-ci-robot added sig/apps Categorizes an issue or PR as relevant to SIG Apps. sig/storage Categorizes an issue or PR as relevant to SIG Storage. sig/autoscaling Categorizes an issue or PR as relevant to SIG Autoscaling. sig/auth Categorizes an issue or PR as relevant to SIG Auth. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Sep 30, 2019
@k8s-ci-robot
Contributor

@geekygirldawn: The label(s) sig/, sig/, sig/ cannot be applied. These labels are supported: api-review, community/discussion, community/maintenance, community/question, cuj/build-train-deploy, cuj/multi-user, platform/aws, platform/azure, platform/gcp, platform/minikube, platform/other

In response to this:

/sig apps
/sig storage
/sig autoscaling
/sig auth

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@geekygirldawn
Contributor Author

/area community-management

@nikhita
Member

nikhita commented Sep 30, 2019

Thanks for auditing these, @geekygirldawn :)

if someone could track them down and update sigs.yaml, that would be great.

/help

For anyone who takes it up:

Most of the work involves tracking down the OWNERS files and following up with SIGs. One suggestion for tracking down these OWNERS files is to run git log -- <file-name> to get the list of commits that touched the file. The latest commit will be the one that removed the file and should contain more context about the change.

If you have any questions, please feel free to ask in #sig-contribex on the k8s slack. 🌈
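The `git log` tip above can be sketched end to end. The snippet below builds a throwaway repo purely to demonstrate the technique; the file name, commit messages, and repo are made up for illustration, not taken from kubernetes/kubernetes:

```python
import os
import subprocess
import tempfile

def last_commit_touching(repo, path):
    """Return the subject of the most recent commit that touched `path`.

    For a deleted OWNERS file, this is typically the commit that removed
    it, which usually explains where ownership moved."""
    subjects = subprocess.run(
        ["git", "log", "--format=%s", "--", path],
        cwd=repo, capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return subjects[0] if subjects else None

# Throwaway demo repo (hypothetical data): add an OWNERS file, then delete it.
repo = tempfile.mkdtemp()
run = lambda *args: subprocess.run(["git", "-C", repo, *args], check=True)
run("init", "-q")
run("config", "user.email", "demo@example.com")
run("config", "user.name", "demo")
with open(os.path.join(repo, "OWNERS"), "w") as f:
    f.write("approvers: []\n")
run("add", "OWNERS")
run("commit", "-q", "-m", "Add OWNERS")
run("rm", "-q", "OWNERS")
run("commit", "-q", "-m", "Remove OWNERS")

# git log lists commits newest-first, so the removal commit comes back first.
print(last_commit_touching(repo, "OWNERS"))
```

From the removal commit's message and diff you can usually recover the replacement OWNERS path to put into sigs.yaml.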

@k8s-ci-robot
Contributor

@nikhita:
This request has been marked as needing help from a contributor.

Please ensure the request meets the requirements listed here.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.

In response to this:

Thanks for auditing these, @geekygirldawn :)

if someone could track them down and update sigs.yaml, that would be great.

/help

For anyone who takes it up:

Most of the work involves tracking down the OWNERS files and following up with SIGs. One suggestion for tracking down these OWNERS files is to run git log -- <file-name> to get the list of commits that touched the file. The latest commit will be the one that removed the file and should contain more context about the change.

If you have any questions, please feel free to ask questions in #sig-contribex on the k8s slack. 🌈


@k8s-ci-robot k8s-ci-robot added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Sep 30, 2019
@mrbobbytables
Member

Thanks @geekygirldawn -- Longer term I think this is something we should look into automating a bit. Walking sigs.yaml and checking for valid OWNERS files shouldn't be too bad. Maybe schedule a report to run once every release? If any are out of date, create a follow-up issue after the release.

@geekygirldawn
Contributor Author

Agreed! I was doing a little bit of analysis of owners files, and one of my scripts returned errors on these files, so it should be easy enough to automate :)
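A minimal sketch of that automation, assuming the sigs.yaml layout in which each SIG's subprojects list their OWNERS file URLs under an `owners` key; the `sigs` data below is a hypothetical fragment for illustration, not the real file:

```python
from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen

def extract_owners_urls(sigs):
    """Yield (sig name, OWNERS URL) pairs from a parsed sigs.yaml structure."""
    for sig in sigs.get("sigs", []):
        for sub in sig.get("subprojects") or []:
            for url in sub.get("owners") or []:
                yield sig["name"], url

def url_exists(url, timeout=10):
    """HEAD-request a raw file URL; a 404 means the OWNERS file is gone."""
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as resp:
            return resp.status == 200
    except (HTTPError, URLError):
        return False

# Hypothetical fragment mirroring the sigs.yaml structure:
sigs = {"sigs": [{"name": "Storage", "subprojects": [{
    "name": "kubernetes-csi",
    "owners": ["https://raw.githubusercontent.com/kubernetes/kubernetes/"
               "master/staging/src/k8s.io/csi-api/OWNERS"],
}]}]}

for sig_name, url in extract_owners_urls(sigs):
    # A real once-per-release report would flag entries where
    # url_exists(url) is False and open a follow-up issue.
    print(sig_name, url)
```

Loading the real file would just be `yaml.safe_load(open("sigs.yaml"))` in place of the inline dict, with the network check applied to every extracted URL.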

@geekygirldawn
Contributor Author

geekygirldawn commented Sep 30, 2019

After a bit more digging, I think https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/csi-api/OWNERS was deleted as part of commit d2aa8178f2450b75f75acc2ed8f0a09119a6d9d3 (Thu Mar 21 13:19:14 2019 -0700, "Remove alpha CRD install.").

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 29, 2019
@nikhita
Member

nikhita commented Dec 29, 2019

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 29, 2019
@mrbobbytables mrbobbytables added the sig/contributor-experience Categorizes an issue or PR as relevant to SIG Contributor Experience. label Mar 11, 2020
@mrbobbytables mrbobbytables added this to the Next milestone Mar 11, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 9, 2020
@mrbobbytables
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 10, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 8, 2020
@markjacksonfishing
Contributor

/remove-lifecycle stale

@spiffxp
Member

spiffxp commented Feb 2, 2021

#5425 would be a good opportunity to catch these

@liggitt
Member

liggitt commented Apr 16, 2021

fixed sig-auth issues

/remove-sig auth

@k8s-ci-robot k8s-ci-robot removed the sig/auth Categorizes an issue or PR as relevant to SIG Auth. label Apr 16, 2021
@mrbobbytables mrbobbytables modified the milestones: v1.21, v1.22 Jun 22, 2021
@ehashman
Member

Is this a dupe of #1913? Can we close one?

@parispittman
Contributor

/remove-community management

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 19, 2021
@dims
Member

dims commented Dec 23, 2021

Found a few more URLs we need to fix:

[dims@dims-m1 11:28] ~/go/src/k8s.io/community ⟩ ~/go/src/github.com/dims/maintainers/maintainers check-urls
Running script : 12-23-2021 11:28:41
found invalid url: https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/master/OWNERS (http code: 404) at (114,7)
found invalid url: https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/registry/extensions/OWNERS (http code: 404) at (239,7)
found invalid url: https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/api/core/v1/OWNERS (http code: 404) at (242,7)
found invalid url: https://raw.githubusercontent.com/kubernetes/client-go/master/scale/OWNERS (http code: 404) at (614,7)
found invalid url: https://raw.githubusercontent.com/kubernetes/kubernetes/master/pkg/controller/cloud/OWNERS (http code: 404) at (824,7)
found invalid url: https://raw.githubusercontent.com/kubernetes/noderesourcetopology-api/master/OWNERS (http code: 404) at (1915,7)
found invalid url: https://raw.githubusercontent.com/kubernetes/kubernetes/master/test/e2e/scalability/OWNERS (http code: 404) at (2101,7)
found invalid url: https://raw.githubusercontent.com/kubernetes-sigs/kube-scheduler-simulator/main/OWNERS (http code: 404) at (2196,7)
found invalid url: https://raw.githubusercontent.com/kubernetes/kubernetes/master/staging/src/k8s.io/csi-api/OWNERS (http code: 404) at (2448,7)
done

@dims
Member

dims commented Dec 24, 2021

An updated version of the tool from @spiffxp shows the same issues:

WARNING: sig-api-machinery/server-binaries is missing kubernetes/kubernetes/master/pkg/OWNERS
WARNING: sig-apps/workloads-api is missing kubernetes/kubernetes/pkg/registry/extensions/OWNERS
WARNING: sig-apps/workloads-api is missing kubernetes/kubernetes/staging/src/k8s.io/api/core/v1/OWNERS
WARNING: sig-autoscaling/scale-client is missing kubernetes/client-go/scale/OWNERS
WARNING: sig-cloud-provider/kubernetes-cloud-provider is missing kubernetes/kubernetes/pkg/controller/cloud/OWNERS
WARNING: sig-node/noderesourcetopology-api is missing kubernetes/noderesourcetopology-api/OWNERS
WARNING: sig-scalability/kubernetes-scalability-and-performance-tests-and-validation is missing kubernetes/kubernetes/test/e2e/scalability/OWNERS
WARNING: sig-scheduling/kube-scheduler-simulator is missing https://raw.githubusercontent.com/kubernetes-sigs/kube-scheduler-simulator/main/OWNERS
WARNING: sig-storage/kubernetes-csi is missing kubernetes/kubernetes/staging/src/k8s.io/csi-api/OWNERS

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 23, 2022
@dims
Member

dims commented Jan 24, 2022

/remove-lifecycle stale

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close


@enj enj added this to SIG Auth Dec 5, 2022
@enj enj moved this to Closed / Done in SIG Auth Dec 5, 2022