update namespaced install manifest to include inverse proxy #1446
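For context, the "inverse proxy" is the proxy agent that exposes the ML Pipelines UI through an inverting proxy. As a rough illustration of what "include inverse proxy" in the namespaced install manifest means, a Deployment along these lines would be added (the resource name, image, and tag below are illustrative assumptions, not the exact content of this PR):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy-agent            # illustrative name; the actual manifest may differ
  namespace: kubeflow
spec:
  replicas: 1
  selector:
    matchLabels:
      app: proxy-agent
  template:
    metadata:
      labels:
        app: proxy-agent
    spec:
      containers:
      - name: proxy-agent
        # illustrative image and tag; see the PR diff for the real values
        image: gcr.io/ml-pipeline/inverse-proxy-agent:0.1.21
        env:
        - name: NAMESPACE
          value: kubeflow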
Conversation
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: IronPan. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/test kubeflow-pipeline-e2e-test
@@ -617,7 +654,7 @@ spec:
       - env:
         - name: NAMESPACE
           value: kubeflow
-        image: gcr.io/ml-pipeline/persistenceagent:0.1.20
+        image: gcr.io/ml-pipeline/persistenceagent:0.1.21
Can we extract the version and put it into some central location?
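One way to centralize the image version, sketched here under the assumption that the manifests could be driven by kustomize (this repo may use a different templating tool), is to keep the generated manifest as-is and override the tag once in a kustomization.yaml:

# kustomization.yaml (hypothetical; only illustrates a central place for the tag)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- namespaced-install.yaml        # hypothetical filename for the namespaced manifest
images:
- name: gcr.io/ml-pipeline/persistenceagent
  newTag: 0.1.21
- name: gcr.io/ml-pipeline/api-server
  newTag: 0.1.21

With a layout like this, a release bump touches only the newTag entries instead of every image: line spread across the manifest.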
/test kubeflow-pipeline-e2e-test
@IronPan: The following test failed, say /retest to rerun it:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
…flow#1446)

* added trainedmodel constants; removed unused constants; renamed const TrainedModelMemoryIncluded -> TrainedModelAllocated
* added conditional check for enough memory
* added MemoryResourceAvailable condition
* added function to sum total requested memory given a trained models list
* updated trained models resources for parent inference service
* add TrainedModelMemoryIncluded label if there are enough resources in inference service predictor; included namespace in listOption; removed MemoryResourceAvailable label deletion
* changed memory validation function
* created IsMemoryResourceAvailable for memory validation for predictors
* corrected constant name TrainedModelMemoryIncluded -> TrainedModelAllocated
* added test case for not enough memory resource available
This is generated by running
/assign @Ark-kun