
Skip backend integration tests when cli flag isn't passed #527

Merged
merged 4 commits into kubeflow:master from be-unit-tests
Dec 17, 2018

Conversation

yebrahim
Contributor

@yebrahim yebrahim commented Dec 12, 2018

Backend unit tests now run by default with `go test ./...`. If you want integration tests to run as well, set the `unitTestsOnly` flag to false: `go test ./... -args -unitTestsOnly=false`.

/area back-end
/assign @neuromage


@yebrahim
Contributor Author

@IronPan @neuromage does it make sense to run backend unit tests in Travis rather than Prow? Why wait for 17+ minutes to find out the result of these tests when you can wait for 3-4 minutes?

@neuromage
Contributor

/lgtm

@vicaire
Contributor

vicaire commented Dec 13, 2018 via email

@IronPan
Member

IronPan commented Dec 14, 2018

The unit tests are not taking long to finish, and they surface their results in GitHub. But feel free to move them to Travis if you want.

@IronPan
Member

IronPan commented Dec 14, 2018

/lgtm

@yebrahim
Contributor Author

@IronPan this needs a /approve as well.
When I looked, the unit tests step was taking a long time to run; it seems to wait for the build image step?

@yebrahim yebrahim changed the title [WIP] Skip backend integration tests when cli flag isn't passed Skip backend integration tests when cli flag isn't passed Dec 14, 2018
@IronPan
Member

IronPan commented Dec 15, 2018

/approve

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: IronPan

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

1 similar comment

@IronPan
Member

IronPan commented Dec 15, 2018

The unit tests and build image steps run in parallel. The unit tests won't depend on the images being built.

@k8s-ci-robot k8s-ci-robot removed the lgtm label Dec 15, 2018
@yebrahim
Contributor Author

Need one more lgtm here after resolving conflicts with master.

@neuromage
Contributor

/lgtm

@k8s-ci-robot k8s-ci-robot merged commit ba261f3 into kubeflow:master Dec 17, 2018
@yebrahim yebrahim deleted the be-unit-tests branch December 18, 2018 18:03
Linchin pushed a commit to Linchin/pipelines that referenced this pull request Apr 11, 2023
* Numerous fixes and improvements to bulk deploying Kubeflow with v06

Fix bug with getting credentials in bulk_deploy

* Need to support obtaining credentials to talk to the K8s API server using
  the pod's service account by calling the in-cluster config loader.

* With v06 deployments we need to handle the case where the DM API might
  not be enabled yet and enable it before checking if KF exists.

* Don't use workload identity, because with workload identity we are seeing
  performance problems when issuing a lot of requests that hit the GKE metadata
  server.

Improve the ability to continue deploying v06 deployments that failed.

* With v06 applications, check whether the ingress actually exists; if not,
  rerun deploy.

  * Delete the istio-ingressgateway service if it's not fully deployed, to try to recover.

* Add a util method for getting Kubernetes credentials

  * When running in a pod there are two cases we need to handle:
    * using the KSA to authenticate to the K8s API server the pod is running in
    * using a kubeconfig file to talk to a different cluster

* Add resource requests to the create_unique_kf_instance jobs to better
  handle scheduling of many jobs

* Create code to delete Kubeflow deployments in bulk

* Don't use a CSV file to provide info about which deployments to create.
  * Uploading CSV files to GCS turns out to add a lot of friction.

  * Instead, just take command-line arguments that specify the range of
    projects to iterate over.

* Fix lint.
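The credential-handling decision described in the commit message (pod service account vs. kubeconfig for a different cluster) can be sketched as below. The function name, return shape, and fallback logic are hypothetical illustrations, not the actual helper added in that commit:

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// chooseKubeAuth sketches the two cases from the commit message: when an
// explicit kubeconfig path is given, use it to talk to a (possibly
// different) cluster; otherwise, if the pod's service account token is
// mounted, use in-cluster config to reach the local API server.
func chooseKubeAuth(kubeconfigPath string) (string, error) {
	if kubeconfigPath != "" {
		return "kubeconfig:" + kubeconfigPath, nil
	}
	// Kubernetes mounts the service account token at this well-known path
	// inside every pod (unless automounting is disabled).
	const saToken = "/var/run/secrets/kubernetes.io/serviceaccount/token"
	if _, err := os.Stat(saToken); err == nil {
		return "incluster", nil
	}
	return "", errors.New("no Kubernetes credentials available")
}

func main() {
	mode, err := chooseKubeAuth("/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	fmt.Println(mode) // prints "kubeconfig:/path/to/kubeconfig"
}
```

In client-go terms, the two branches correspond to building a config from a kubeconfig file versus calling the in-cluster config loader; this sketch only captures the selection logic.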