
Flake? E2E Test failed with GCP quota exceeded #48688

Closed · fasaxc opened this issue Jul 10, 2017 · 10 comments

Labels: kind/bug (Categorizes issue or PR as related to a bug.) · sig/testing (Categorizes an issue or PR as relevant to SIG Testing.)

Comments

@fasaxc (Contributor) commented Jul 10, 2017

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

CI tests for #48469 failed with a quota error:

I0708 13:01:19.525] Creating new network: e2e-38580
W0708 13:02:07.576] ERROR: (gcloud.compute.networks.create) Could not fetch resource:
W0708 13:02:07.577]  - Quota 'SUBNETWORKS' exceeded.  Limit: 100.0

Full logs:

https://storage.googleapis.com/kubernetes-jenkins/pr-logs/pull/48469/pull-kubernetes-kubemark-e2e-gce/38580/build-log.txt

What you expected to happen:

Clean test run.

How to reproduce it (as minimally and precisely as possible):

I don't know enough about the test infrastructure to say.

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version): Unknown (test infra)
  • Cloud provider or hardware configuration: GCP
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:
@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Jul 10, 2017
@k8s-github-robot

@fasaxc There are no sig labels on this issue. Please add a sig label by:
(1) mentioning a sig: @kubernetes/sig-<team-name>-misc
    e.g., @kubernetes/sig-api-machinery-* for API Machinery
(2) specifying the label manually: /sig <label>
    e.g., /sig scalability for sig/scalability

Note: method (1) will trigger a notification to the team. You can find the team list here and the label list here.

@k8s-github-robot k8s-github-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jul 10, 2017
@fasaxc (Contributor, Author) commented Jul 10, 2017

@kubernetes/sig-testing-misc

@k8s-ci-robot k8s-ci-robot added the sig/testing Categorizes an issue or PR as relevant to SIG Testing. label Jul 10, 2017
@k8s-github-robot k8s-github-robot removed the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jul 10, 2017
@k8s-ci-robot (Contributor)

@fasaxc: Reiterating the mentions to trigger a notification:
@kubernetes/sig-testing-misc.

In response to this:

@kubernetes/sig-testing-misc

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@0xmichalis (Contributor)

/assign @krzyzacy

@0xmichalis (Contributor)

@krzyzacy IIUC, your new tool, boskos, is intended to fix these kinds of failures, right?

@krzyzacy (Member)

No, that should be the janitor, and it seems something is leaking badly from the test.

@krzyzacy (Member)

I'll check the project. @MrHohn, IMO it does not even run tests; anything suspicious?

@MrHohn (Member) commented Jul 10, 2017

Quick update: it seems the auto-mode network now includes more regions than before, so we hit the subnetwork quota limit when too many PR jobs are running together.

@krzyzacy (Member)

Sent a quota bump request; we'll need a better design in the future.

@krzyzacy (Member)

/close
