Manifests: Rename metadata gRPC server's resources to metadata-grpc-* #3108

Merged · 3 commits · Mar 27, 2020
backend/metadata_writer/src/metadata_helpers.py (4 additions, 2 deletions)

```diff
@@ -21,8 +21,10 @@


 def connect_to_mlmd() -> metadata_store.MetadataStore:
-    metadata_service_host = os.environ.get('METADATA_SERVICE_SERVICE_HOST', 'metadata-service')
-    metadata_service_port = int(os.environ.get('METADATA_SERVICE_SERVICE_PORT', 8080))
+    metadata_service_host = os.environ.get(
+        'METADATA_GRPC_SERVICE_SERVICE_HOST', 'metadata-grpc-service')
+    metadata_service_port = int(os.environ.get(
+        'METADATA_GRPC_SERVICE_SERVICE_PORT', 8080))

     mlmd_connection_config = metadata_store_pb2.MetadataStoreClientConfig(
         host=metadata_service_host,
```
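Kubernetes injects `<SERVICE_NAME>_SERVICE_HOST` / `<SERVICE_NAME>_SERVICE_PORT` environment variables for each Service visible to a pod, with the Service name upper-cased and dashes replaced by underscores — which is why renaming the Service to `metadata-grpc-service` forces the variable names above to change. A minimal sketch of that lookup (the `resolve_mlmd_address` helper is hypothetical, not part of this PR):

```python
import os

def resolve_mlmd_address(default_host='metadata-grpc-service',
                         default_port=8080):
    """Resolve the MLMD gRPC endpoint the way connect_to_mlmd() does.

    Kubernetes derives the variable names from the Service name:
    'metadata-grpc-service' -> METADATA_GRPC_SERVICE_SERVICE_HOST/_PORT.
    The defaults let the code run outside a cluster as well.
    """
    host = os.environ.get('METADATA_GRPC_SERVICE_SERVICE_HOST', default_host)
    port = int(os.environ.get('METADATA_GRPC_SERVICE_SERVICE_PORT',
                              default_port))
    return host, port
```

Inside a cluster the injected variables win; outside, the in-cluster Service DNS name and port are used as fallbacks.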
```diff
@@ -4,33 +4,33 @@ metadata:
   labels:
     app: metadata
     app.kubernetes.io/name: {{ .Release.Name }}
-  name: metadata-service
+  name: metadata-grpc-service
```
Comment from a Contributor:

Any reason why you prefer renaming it? Is it just to make the meaning clearer? I agree it should be named as in this PR. Background: renaming a deployment in the manifests is treated as not backward compatible in GCP Marketplace, which means we need to bump the version. If this is just to make the name more understandable, I'd prefer you hold this CL for ~1-2 weeks so that we can put it into KFP 0.4.x.

Comment from a Contributor:

The purpose is to be consistent with the Kubeflow deployment. We've already released 0.3.0, so even if we hold this, the next release won't be backward compatible. I think there is no reason to hold it.

```diff
 spec:
   ports:
   - name: md-backendapi
     port: 8080
     protocol: TCP
   selector:
-    component: metadata-server
+    component: metadata-grpc-server
     app.kubernetes.io/name: {{ .Release.Name }}
 ---
 apiVersion: apps/v1beta2
 kind: Deployment
 metadata:
   labels:
-    component: metadata-server
+    component: metadata-grpc-server
     app.kubernetes.io/name: {{ .Release.Name }}
-  name: metadata-deployment
+  name: metadata-grpc-deployment
 spec:
   replicas: 1
   selector:
     matchLabels:
-      component: metadata-server
+      component: metadata-grpc-server
       app.kubernetes.io/name: {{ .Release.Name }}
   template:
     metadata:
       labels:
-        component: metadata-server
+        component: metadata-grpc-server
         app.kubernetes.io/name: {{ .Release.Name }}
     spec:
       containers:
@@ -163,9 +163,9 @@ kind: ConfigMap
 metadata:
   name: metadata-grpc-configmap
   labels:
-    component: metadata-server
+    component: metadata-grpc-server
 data:
-  METADATA_GRPC_SERVICE_HOST: "metadata-service"
+  METADATA_GRPC_SERVICE_HOST: "metadata-grpc-service"
   METADATA_GRPC_SERVICE_PORT: "8080"
 ---
 apiVersion: apps/v1
```
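A rename like this touches names, labels, and selectors in several files, so it is easy to leave a stale reference behind. One hedged way to check is to grep the rendered manifests for the old names — the file name `rendered-manifests.yaml` is an assumption (e.g. the output of `kubectl kustomize` or `helm template`), not something from this PR:

```shell
#!/bin/sh
# Hypothetical post-rename check: none of the old resource names should
# survive in the rendered manifests. Prints any stale reference it finds.
rendered="rendered-manifests.yaml"   # assumed render output, not from this PR
for old in metadata-deployment metadata-service metadata-server; do
  if grep -qw "$old" "$rendered" 2>/dev/null; then
    echo "stale reference: $old"
  fi
done
```

`grep -w` keeps `metadata-service` from matching the new `metadata-grpc-service` only because the old string never appears as a substring of the new names; for trickier renames an anchored pattern would be safer.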
manifests/kustomize/README.md (18 additions)

````diff
@@ -76,6 +76,24 @@ kubectl kustomize base/crds | kubectl delete -f -

 ```

+## Upgrade
+Note - Do **NOT** follow these instructions if you are upgrading KFP in a
+[proper Kubeflow installation](https://www.kubeflow.org/docs/started/getting-started/).
+
+If you have already deployed a standalone KFP installation of version prior to
+0.2.5 and you want to upgrade it, make sure the following resources do not
+exist: `metadata-deployment`, `metadata-service`.
+```
+kubectl -n <KFP_NAMESPACE> get deployments | grep metadata-deployment
+kubectl -n <KFP_NAMESPACE> get service | grep metadata-service
+```
+
+If they exist, you can delete them by running the following commands:
+```
+kubectl -n <KFP_NAMESPACE> delete deployment metadata-deployment
+kubectl -n <KFP_NAMESPACE> delete service metadata-service
+```
+
 ## Troubleshooting

 ### Permission error installing Kubeflow Pipelines to a cluster
````

Review thread on this section:

Comment from a Contributor: Can you include a link to https://www.kubeflow.org/docs/pipelines/installation/overview/ for background information on what a "proper Kubeflow installation" means?

Comment from a Contributor: I'd suggest saying "when upgrading from version < xxx, you need to do this". The exact version is still unknown now, but you can put a placeholder there; we will follow up before the release.

Comment from the Member Author: For the proper Kubeflow installation, you mean the getting-started link instead, right?

Comment from the Member Author: I pushed the change with the getting-started link. Please confirm that it should be this link. I can change it if you feel it's better to point to the pipelines page, but then the message should be slightly different, since that link contains information for both types of installation.

Comment from a Contributor: I see, sounds good to me.
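The check-and-delete steps from the new Upgrade section can be folded into one idempotent snippet — a sketch only, and the `kubeflow` default for `KFP_NAMESPACE` is an assumption, not something the README specifies:

```shell
#!/bin/sh
# Sketch of the README's upgrade steps: delete the pre-0.2.5 metadata
# resources only if they actually exist, so re-running is harmless.
NS="${KFP_NAMESPACE:-kubeflow}"   # assumed default namespace

if kubectl -n "$NS" get deployment metadata-deployment >/dev/null 2>&1; then
  kubectl -n "$NS" delete deployment metadata-deployment
fi
if kubectl -n "$NS" get service metadata-service >/dev/null 2>&1; then
  kubectl -n "$NS" delete service metadata-service
fi
```

Guarding each delete with a `get` avoids the error exit that `kubectl delete` produces when the resource is already gone.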
manifests/kustomize/base/metadata/kustomization.yaml (2 additions, 2 deletions)

```diff
@@ -4,8 +4,8 @@ kind: Kustomization
 resources:
 - metadata-configmap.yaml
 - metadata-mysql-secret.yaml
-- metadata-deployment.yaml
-- metadata-service.yaml
+- metadata-grpc-deployment.yaml
+- metadata-grpc-service.yaml
 - metadata-envoy-deployment.yaml
 - metadata-envoy-service.yaml
 - metadata-writer-deployment.yaml
```
manifests/kustomize/base/metadata/metadata-configmap.yaml (2 additions, 2 deletions)

```diff
@@ -26,7 +26,7 @@ kind: ConfigMap
 metadata:
   name: metadata-grpc-configmap
   labels:
-    component: metadata-server
+    component: metadata-grpc-server
 data:
-  METADATA_GRPC_SERVICE_HOST: "metadata-service"
+  METADATA_GRPC_SERVICE_HOST: "metadata-grpc-service"
   METADATA_GRPC_SERVICE_PORT: "8080"
```
```diff
@@ -1,7 +1,7 @@
 apiVersion: apps/v1beta2
 kind: Deployment
 metadata:
-  name: metadata-envoy
+  name: metadata-envoy-deployment
   labels:
     component: metadata-envoy
 spec:
```
```diff
@@ -2,7 +2,7 @@ kind: Service
 apiVersion: v1
 metadata:
   labels:
-    app: metadata
+    app: metadata-envoy
   name: metadata-envoy-service
 spec:
   selector:
```
```diff
@@ -1,18 +1,18 @@
 apiVersion: apps/v1beta2
 kind: Deployment
 metadata:
-  name: metadata-deployment
+  name: metadata-grpc-deployment
   labels:
-    component: metadata-server
+    component: metadata-grpc-server
 spec:
   replicas: 1
   selector:
     matchLabels:
-      component: metadata-server
+      component: metadata-grpc-server
   template:
     metadata:
       labels:
-        component: metadata-server
+        component: metadata-grpc-server
     spec:
       containers:
       - name: container
```
```diff
@@ -3,10 +3,10 @@ apiVersion: v1
 metadata:
   labels:
     app: metadata
-  name: metadata-service
+  name: metadata-grpc-service
 spec:
   selector:
-    component: metadata-server
+    component: metadata-grpc-server
   type: ClusterIP
   ports:
   - port: 8080
```