
Added 'openapi' generated resources #819

Merged

Conversation

jpkrohling (Contributor) commented Dec 6, 2019

Resolves #763
Signed-off-by: Juraci Paixão Kröhling juraci@kroehling.de
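For context, "openapi generated resources" here means OpenAPI v3 validation schemas embedded in the generated CRDs. A minimal, illustrative fragment of what such a schema looks like — the field under spec is an assumption for illustration; the real generated jaegertracing.io_jaegers_crd.yaml is far larger:

```yaml
# Illustrative fragment only; the generated file contains the full schema.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: jaegers.jaegertracing.io
spec:
  validation:
    openAPIV3Schema:
      type: object
      properties:
        spec:
          type: object
          properties:
            strategy:
              type: string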

jpkrohling (Contributor, Author):

Looks like the OpenShift tests need to run on a newer OpenShift version. Here's the output of the tests against a production-quality OpenShift 4.2 cluster:

$ make e2e-tests-self-provisioned-es
Formatting code...
Building...
STEP 1: FROM centos
STEP 2: ENV OPERATOR=/usr/local/bin/jaeger-operator     USER_UID=1001     USER_NAME=jaeger-operator
--> Using cache d39e772001437881e4b3772eaa10b72940b229a74f92fdb707f57c1bac560c98
STEP 3: RUN INSTALL_PKGS="       openssl       " &&     yum install -y $INSTALL_PKGS &&     rpm -V $INSTALL_PKGS &&     yum clean all &&     mkdir /tmp/_working_dir &&     chmod og+w /tmp/_working_dir
--> Using cache ac407c0c4359977a710993586ca80428b405893b92f760a15957db89a41df681
STEP 4: COPY scripts/* /scripts/
49dc09817b93fd19c934731a4a5b0c75da562d18da7b8e5f975a3fd69c07fde7
STEP 5: COPY build/_output/bin/jaeger-operator ${OPERATOR}
65d61c6e16a0316c8dab4e089670fa7a7daacfa1fb038268a2db58001ff9f8b3
STEP 6: COPY build/bin /usr/local/bin
f934cb369473f88fe206fab6944c44306e2278ac35735c5bddd73c3586466dd6
STEP 7: RUN  /usr/local/bin/user_setup
+ mkdir -p /root
+ chown 1001:0 /root
+ chmod ug+rwx /root
+ chmod g+rw /etc/passwd
+ rm /usr/local/bin/user_setup
a7d0c76f330202f04f1de3f23abefd19a387145a8756b8d4ebc5db5ee138f745
STEP 8: ENTRYPOINT ["/usr/local/bin/entrypoint"]
b1dcf9886c43e7be2d629b449e781bf001ad3ef354536af9ae68aa7355f641f1
STEP 9: USER ${USER_UID}
STEP 10: COMMIT jpkroehling/jaeger-operator:latest
8e3923c67e3d0c27cbfcc86a72d91e2d6a5f1257a86be51aad700353e70e59d0
Pushing image jpkroehling/jaeger-operator:latest...
Getting image source signatures
Copying blob 9cc54db7be65 done
Copying blob 20a6d96fccf8 done
Copying blob c7b8cef4faba done
Copying blob 52704bfa8857 done
Copying blob 9e607bb861a7 skipped: already exists
Copying blob 12350ed965ac skipped: already exists
Copying config 8e3923c67e done
Writing manifest to image destination
Storing signatures
# Elasticsearch requires labeled nodes. These labels are by default present in OCP 4.2
node/ip-10-0-132-58.ec2.internal not labeled
node/ip-10-0-140-229.ec2.internal not labeled
node/ip-10-0-147-172.ec2.internal not labeled
node/ip-10-0-154-227.ec2.internal not labeled
node/ip-10-0-174-104.ec2.internal not labeled
node/ip-10-0-174-186.ec2.internal not labeled
# This is not required in OCP 4.1. The node tuning operator configures the property automatically
# when label tuned.openshift.io/elasticsearch=true label is present on the ES pod. The label
# is configured by ES operator.
namespace/openshift-logging created
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com configured
serviceaccount/elasticsearch-operator created
clusterrole.rbac.authorization.k8s.io/elasticsearch-operator created
clusterrolebinding.rbac.authorization.k8s.io/elasticsearch-operator-rolebinding created
customresourcedefinition.apiextensions.k8s.io/elasticsearches.logging.openshift.io created
deployment.apps/elasticsearch-operator created
deployment.extensions/elasticsearch-operator image updated
Running Self provisioned Elasticsearch end-to-end tests...
ok  	github.com/jaegertracing/jaeger-operator/test/e2e	168.584s

OpenShift version:

$ oc version
Client Version: openshift-clients-4.2.1-201910220950
Server Version: 4.2.2
Kubernetes Version: v1.14.6+868bc38

cc @kevinearls

@jpkrohling jpkrohling changed the title Added 'openapi' generated resources WIP - Added 'openapi' generated resources Dec 12, 2019
jpkrohling (Contributor, Author):

I'm marking this as WIP, because we don't want this merged before 1.16.

@jpkrohling jpkrohling changed the title WIP - Added 'openapi' generated resources Added 'openapi' generated resources Dec 17, 2019
Signed-off-by: Juraci Paixão Kröhling <juraci@kroehling.de>
jpkrohling (Contributor, Author):

Removed WIP; this is ready to be reviewed.

@@ -49,12 +48,12 @@ format:
.PHONY: lint
lint:
@echo Linting...
@${GOPATH}/bin/golint -set_exit_status=1 $(PACKAGES)
@./.ci/lint.sh
Member:

Is GOPATH set in the script? I remember that for the format target it wasn't.

jpkrohling (Contributor, Author):

If golint is in the PATH, it is used. If it is not in the PATH, then GOPATH/bin/golint is used instead, provided GOPATH is set. If golint still can't be found, the script fails.

GOLINT=golint

# Prefer golint from the PATH; otherwise, when GOPATH is set,
# fall back to $GOPATH/bin/golint.
if ! command -v "${GOLINT}" > /dev/null; then
    if [ -n "${GOPATH}" ]; then
        GOLINT="${GOPATH}/bin/golint"
    fi
fi
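The lookup pattern above can be sketched as a self-contained script; the tool and path names below are hypothetical stand-ins so the sketch runs without golint or a real GOPATH:

```shell
#!/bin/sh
# Sketch of the PATH-then-GOPATH lookup pattern from .ci/lint.sh.
# TOOL and GOPATH values here are hypothetical, for demonstration only.
TOOL=definitely-not-on-path-xyz   # stands in for golint
GOPATH=/tmp/fake-gopath           # stands in for a real GOPATH

# If the tool is not found in the PATH, fall back to $GOPATH/bin.
if ! command -v "${TOOL}" > /dev/null 2>&1; then
    if [ -n "${GOPATH}" ]; then
        TOOL="${GOPATH}/bin/${TOOL}"
    fi
fi
echo "resolved: ${TOOL}"
```

Running it prints the fallback path, since the fake tool is not in the PATH.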

@jpkrohling jpkrohling merged commit fcf9d40 into jaegertracing:master Dec 18, 2019
yeya24 (Contributor) commented Dec 20, 2019

I tried to deploy the CRD using this yaml, but it seems the metadata is too long. Could you please take a look? @jpkrohling

kubectl apply -f jaegertracing.io_jaegers_crd.yaml
The CustomResourceDefinition "jaegers.jaegertracing.io" is invalid: metadata.annotations: Too long: must have at most 262144 characters

jpkrohling (Contributor, Author):

I ran into the same problem during development and was told that CRDs have to be created/deleted, not applied. When using kubectl apply, the older version is stored in an annotation, so use kubectl create ... instead.
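The underlying cause: kubectl apply records the complete previous object in the kubectl.kubernetes.io/last-applied-configuration annotation, and annotation values are capped at 262144 characters, which the large generated schema exceeds. A sketch of the workaround — these commands assume cluster access, so they are illustrative only:

```shell
# Fails for this CRD: apply writes the full object into the
# last-applied-configuration annotation, exceeding the 262144-char limit.
kubectl apply -f jaegertracing.io_jaegers_crd.yaml

# Works: create does not write that annotation.
kubectl create -f jaegertracing.io_jaegers_crd.yaml

# To update later, delete and recreate instead of applying:
kubectl delete -f jaegertracing.io_jaegers_crd.yaml
kubectl create -f jaegertracing.io_jaegers_crd.yaml
```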

yeya24 (Contributor) commented Dec 20, 2019

> I ran into the same problem when developing and was told that CRDs have to be created/deleted, not applied. When using kubectl apply, the older version is stored in an annotation, so, use kubectl create ... instead.

Thanks!

Aisuko commented Dec 27, 2019

I hit the same issue when running kubectl apply -f on the CRD; kubectl create worked for me.

Successfully merging this pull request may close these issues.

Add schema to CRD
4 participants