This repository was archived by the owner on Feb 22, 2022. It is now read-only.

adds chart for kafka #144

Merged
merged 7 commits into from
Feb 3, 2017

Conversation

faraazkhan
Contributor

@k8s-ci-robot
Contributor

Can a kubernetes member verify that this patch is reasonable to test? If so, please reply with "@k8s-bot ok to test" on its own line.

Regular contributors should join the org to skip this step.

@prydonius
Member

@k8s-bot ok to test

@prydonius
Member

@faraazkhan can you sign the Linux Foundation CLA?

@faraazkhan
Contributor Author

faraazkhan commented Oct 22, 2016

@prydonius I am trying. I added my GitHub account and everything; it says I am supposed to get an email with the CLA, but nothing so far.

@faraazkhan
Contributor Author

faraazkhan commented Oct 22, 2016

Also, it looks like we are running an old version of helm in Jenkins right now. The error I1021 19:04:55.614] [ERROR] templates/kafka-petset.yaml: namespace option is currently NOT supported doesn't seem right 😄

Update: Verified Jenkins is running a newer version: Client: &version.Version{SemVer:"v2.0.0-beta.1", GitCommit:"1b792f990675702488899b94352168572c54c972", GitTreeState:"clean"}. I have tested this against Client: &version.Version{SemVer:"v2.0.0-alpha.5", GitCommit:"a324146945c01a1e2dd7eaf23caf0e55fabfd3d2", GitTreeState:"clean"}.

Not sure why the error then 😞

@prydonius
Member

@faraazkhan Thanks for signing the CLAs! Namespace shouldn't be set in the manifests, since Helm manages which namespace a chart's resources get installed into via the helm install --namespace flag.
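
As a rough sketch of what that looks like in practice (the resource and names below are illustrative, not taken from this chart), the manifest simply leaves out any namespace field:

apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-kafka   # no namespace: key; helm install --namespace decides where this lands
  labels:
    app: kafka
    release: {{ .Release.Name }}
spec:
  ports:
    - port: 9092
      name: kafka
  selector:
    app: kafka
    release: {{ .Release.Name }}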

home: https://kafka.apache.org/
sources:
- https://github.com/kubernetes/charts/tree/master/incubator/zookeeper
- https://github.com/Yolean/kubernetes-kafka
Member

can you add https://github.com/solsson/dockerfiles/tree/master/kafka

@@ -0,0 +1,10 @@
name: zookeeper
Member

Helm now supports specifying dependencies using a requirements.yaml file. Can you update this to remove this vendored version of zookeeper, and instead just add a requirements.yaml that can be used to bring in the dependency?

See https://github.com/kubernetes/helm/blob/master/docs/charts.md#managing-dependencies-with-requirementsyaml and https://github.com/kubernetes/charts/tree/master/incubator/patroni for an example.
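
For reference, a minimal requirements.yaml along those lines might look like the following (the repository URL and version range are assumptions, not taken from this PR):

# requirements.yaml (sketch)
dependencies:
  - name: zookeeper
    version: 0.1.x
    repository: https://kubernetes-charts-incubator.storage.googleapis.com/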

global:
Namespace: "kafka"
ZooKeeperServiceName: "zookeeper"

Member

Can you remove these trailing lines?

@faraazkhan
Contributor Author

@prydonius Jenkins is currently failing with W1023 20:38:31.781] Error: deletion error count 1: no objects visited. I have seen this error, along with at least one more, in the latest version of helm. This doesn't necessarily seem like an issue with the chart, but more with helm?

@faraazkhan
Contributor Author

@k8s-bot e2e test this

Contributor

@chrislovecnm left a comment

Multiple tweaks ... looks good


## Pre Requisites:

* Kubernetes 1.3 with alpha APIs enable
Contributor

You are also using storage classes. You should probably mention both of those.
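
A possible wording for that README section (a sketch, not the final text):

## Pre Requisites:

* Kubernetes 1.3 with alpha APIs enabled
* A volume provisioner that honors the volume.alpha.kubernetes.io/storage-class annotation used by the persistent volume claims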


### Connecting to Kafka

You can connect to Kafka by running a simple pod in the K8s cluster like this with a configuration like this:
Contributor

You may want to add a notes.txt file with these instructions.

- containerPort: 9092
name: kafka
resources:
requests:
Contributor

We need a limits section in here as well.
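
For example, a resources block with both requests and limits might look something like this (the numbers are placeholders, not values from this PR):

resources:
  requests:
    cpu: 100m
    memory: 512Mi
  limits:
    cpu: 500m
    memory: 1Gi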

mountPath: /tmp/zookeeper
volumes:
- name: workdir
emptyDir: {}
Contributor

Dumb question, but does this need persistence?

- metadata:
name: datadir
annotations:
volume.alpha.kubernetes.io/storage-class: anything
Contributor

Should we have this set up to support either a PVC or an emptyDir? Looking for an example ...

Contributor Author

I think the emptyDir can be removed; I want to leave the PVC in. I will push that out shortly.
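
One common pattern, sketched here as an assumption rather than as what this chart ended up doing, is to drive the choice from values.yaml and branch in the template:

# values.yaml (illustrative keys, not from this PR)
persistence:
  enabled: true
  storageClass: anything
  size: 8Gi

# In the PetSet template, {{- if .Values.persistence.enabled }} would wrap the
# volumeClaimTemplates block, and the else branch would declare an emptyDir
# volume named datadir in the pod spec instead.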

pod.alpha.kubernetes.io/initialized: "true"
pod.alpha.kubernetes.io/init-containers: '[
]'
spec:

Contributor

Still needs a readiness probe. We can move into incubator without one.

Contributor Author

I could just add the same test for readiness as well. Not sure how valuable that would be in Kafka's situation, though.
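
As a rough sketch (the port and timings here are assumptions, not from this PR), a simple TCP readiness check on the broker port would look like:

readinessProbe:
  tcpSocket:
    port: 9092
  initialDelaySeconds: 30
  periodSeconds: 10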

@chrislovecnm
Contributor

@viglesiasce any ideas on the build failure "W1023 20:38:31.781] Error: deletion error count 1: no objects visited", do we need to bump the helm version we are using?

*.bak
*.tmp
*~
# Various IDEs
Contributor

Is it best practice to add proprietary file types to a global .ignore file? Shouldn't the user be managing these?

Contributor

@kris-nova this is autogenned by helm.

Adds liveness probe
Adds resource limits
Improves README.md
@faraazkhan
Contributor Author

faraazkhan commented Oct 30, 2016

Ok, I have addressed the comments here. But I'm still getting the Error: deletion error count 1: no objects visited on Jenkins. I also see this happen locally when I do a helm delete --purge my-release. It seems to be raised here; I will try to dig into it further later, and in the meanwhile any help is appreciated. @prydonius @viglesiasce @chrislovecnm

Update: Please note that despite the error, all the resources seem to be deleted fine and I am able to reuse the helm release name, even in the same namespace, with no errors (so purge worked just fine).

@chrislovecnm
Contributor

@k8s-bot e2e test this

Contributor

@chrislovecnm left a comment

We still have the storage class issue and the readiness probe is not defined, but we can merge this into incubator with these open items.

@viglesiasce any ideas about the build failures?

Thanks for the chart @faraazkhan!!!


@viglesiasce
Contributor

Sorry for the delay @faraazkhan! Will look into the failures today.

@technosophos
Member

With that deletion error, can you still see the resources in Kubernetes that were supposed to be deleted? That error message usually indicates that the deletion logic went to remove a resource from Kubernetes, and the resource was missing.

@viglesiasce
Contributor

One thing I noticed is that after deploying, the kf and kf-broker services end up with no endpoints.

@faraazkhan is that expected?

@technosophos could that be an issue?

NAME                        ENDPOINTS                                                        AGE
ep/demo-app-promchart       10.140.5.17:8080,10.140.5.18:8080,10.140.5.25:8080 + 2 more...   1d
ep/demo-grafana             10.140.5.21:3000                                                 1d
ep/kubernetes               104.199.112.125:443                                              16d
ep/plinking-racoo-alerts    10.140.5.6:9093                                                  5d
ep/plinking-racoo-server    10.140.5.7:9090                                                  5d
ep/steely-panda-nginx       10.140.5.23:80                                                   21h
ep/voting-goose-kf          <none>                                                           5m
ep/voting-goose-kf-broker   <none>                                                           5m
ep/voting-goose-zk          10.140.1.11:2888,10.140.1.13:2888,10.140.1.14:2888 + 3 more...   5m
NAME                         CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
svc/demo-app-promchart       10.143.244.167   <none>        80/TCP              1d
svc/demo-grafana             10.143.246.48    <none>        80/TCP              1d
svc/kubernetes               10.143.240.1     <none>        443/TCP             16d
svc/plinking-racoo-alerts    10.143.245.26    <none>        9093/TCP            5d
svc/plinking-racoo-server    10.143.244.45    <none>        80/TCP              5d
svc/steely-panda-nginx       10.143.242.164   <none>        80/TCP              21h
svc/voting-goose-kf          10.143.243.96    <none>        9092/TCP            5m
svc/voting-goose-kf-broker   None             <none>        9092/TCP            5m
svc/voting-goose-zk          None             <none>        2888/TCP,3888/TCP   5m

@chrislovecnm
Contributor

chrislovecnm commented Nov 3, 2016

Houston, that is a problem with the services ;) @faraazkhan did you monkey with your selectors, or @viglesiasce do you think this is helm?

@faraazkhan
Contributor Author

faraazkhan commented Nov 10, 2016

@chrislovecnm @viglesiasce @technosophos sorry about the delay in responding. I just pushed a few changes that ensure all the services created have endpoints. I see what @technosophos is saying, but I'm not sure what that resource could be; everything created in the helm chart seems to be destroyed correctly:

[screenshot: nov-09-2016 20-38-24]

@viglesiasce
Contributor

Thanks @faraazkhan! Things are looking better wrt services. Can you add a NOTES file that explains next steps to users?

@faraazkhan
Contributor Author

@viglesiasce Can you please be specific about how I can improve this? Maybe I am not following a convention (notes vs NOTES): https://github.com/kubernetes/charts/pull/144/files#diff-2654b4aa4eacf071f2e90debaad8b050R1

@chrislovecnm
Contributor

A NOTES.txt file. Look at the charts that are not in incubator; they should have examples.

@@ -0,0 +1,36 @@
You can connect to Kafka by running a simple pod in the K8s cluster like this with a configuration like this:
Member

Helm will specifically look for a file called NOTES.txt, render it as a template, and print it out after installing/upgrading or running helm status on the release. This will allow you to print out working commands with the name of the release! See https://github.com/kubernetes/charts/blob/master/incubator/patroni/templates/NOTES.txt for an example :)
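
A minimal templates/NOTES.txt along those lines (the service name suffix and image are only illustrative) could be:

Kafka can be reached within the cluster at:
  {{ .Release.Name }}-kf.{{ .Release.Namespace }}.svc.cluster.local:9092

To start a test client pod:
  kubectl run testclient -it --rm --image solsson/kafka:0.10.0.1 --command -- bash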

Contributor

👍

Contributor

@chrislovecnm left a comment

This is also an issue:

root@ip-172-20-82-63:/var/log# kubectl -n kafka describe svc
Name:           exacerbated-ar-kafka
Namespace:      kafka
Labels:         chart=kafka-0.1.0
            component=exacerbated-ar-kafka
            heritage=Tiller
            release=exacerbated-ar
Selector:       app=kafka,component=exacerbated-ar-kafka
Type:           ClusterIP
IP:         100.69.243.160
Port:           <unset> 9092/TCP
Endpoints:      <none>
Session Affinity:   None
No events.

Name:           exacerbated-ar-kafka-bro
Namespace:      kafka
Labels:         chart=kafka-0.1.0
            component=exacerbated-ar-kafka
            heritage=Tiller
            release=exacerbated-ar
Selector:       app=kafka,component=exacerbated-ar-kafka
Type:           ClusterIP
IP:         None
Port:           <unset> 9092/TCP
Endpoints:      <none>
Session Affinity:   None
No events.

Name:           exacerbated-ar-zk
Namespace:      kafka
Labels:         app=zk
            chart=zookeeper-0.1.1
            component=exacerbated-ar-zk
            heritage=Tiller
            release=exacerbated-ar
Selector:       app=zk,component=exacerbated-ar-zk
Type:           ClusterIP
IP:         None
Port:           client  2181/TCP
Endpoints:      100.96.3.3:2181,100.96.4.5:2181,100.96.5.2:2181
Port:           peer    2888/TCP
Endpoints:      100.96.3.3:2888,100.96.4.5:2888,100.96.5.2:2888
Port:           leader-election 3888/TCP
Endpoints:      100.96.3.3:3888,100.96.4.5:3888,100.96.5.2:3888
Session Affinity:   None
No events.

Name:           zookeeper
Namespace:      kafka
Labels:         <none>
Selector:       app=zk
Type:           ClusterIP
IP:         100.65.108.22
Port:           client  2181/TCP
Endpoints:      100.96.3.3:2181,100.96.4.5:2181,100.96.5.2:2181
Session Affinity:   None

You have two services that do not have endpoints. I am uncertain what the purpose of these services is.


## Known Limitations

* Namespace creation is not automated
Contributor

This is an easy fix. It will need to happen for stable. This may be a reason why the unit tests are failing.

Contributor

@faraazkhan any word on what these services are for?

@viglesiasce
Contributor

@k8s-bot ok to test

@k8s-ci-robot
Contributor

Jenkins Charts e2e failed for commit ca656b5. Full PR test history.

The magic incantation to run this job again is @k8s-bot e2e test this. Please help us cut down flakes by linking to an open flake issue when you hit one in your PR.

Contributor

@viglesiasce left a comment

  1. Move notes.txt to templates/NOTES.txt
  2. Update NOTES.txt to be templatized
  3. Remove any dependency on a particular namespace from the notes

kind: Pod
metadata:
name: testclient
namespace: kafka
Contributor

I wouldn't set the namespace explicitly here. Users may not have configured this namespace for installation.

You can make this a kubectl run command and avoid having the pod yaml here. Look at the postgresql chart as an example.

kubectl run client -it --rm --image solsson/kafka:0.10.0.1 --command -- bash

To stop the listener session above press: Ctrl+C

To start an interactive message producer session:
kubectl -n kafka exec -ti testclient -- ./bin/kafka-console-producer.sh --broker-list my-release-broker-kf.kafka.svc.cluster.local:9092 --topic test1
Contributor

You can simplify the DNS name there and make it --broker-list {{ .Release.Name }}-kf:9092. Again, this makes the command work across namespaces.
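
For instance, the NOTES.txt line could then read something like (a sketch; the testclient pod name is carried over from the example above):

kubectl exec -ti testclient -- ./bin/kafka-console-producer.sh --broker-list {{ .Release.Name }}-kf:9092 --topic test1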

@F21

F21 commented Dec 19, 2016

Any update here?

@prydonius
Member

@faraazkhan any updates here?

@viglesiasce viglesiasce merged commit 00e7aea into helm:master Feb 3, 2017
solsson added a commit to Yolean/kubernetes-kafka that referenced this pull request Jun 27, 2017
which might not matter because we no longer have a loadbalancing service.

These probes won't catch all failure modes,
but if they fail we're pretty sure the container is malfunctioning.

I found some sources recommending ./bin/kafka-topics.sh for probes
but to me it looks risky to introduce a dependency to some other service for such things.
One such source is helm/charts#144

The zookeeper probe is from
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
An issue is that zookeeper's logs are quite verbose for every probe.
solsson added a commit to solsson/dockerfiles that referenced this pull request Jul 26, 2017
helm/charts#144

Beware: This image is 841 MB, compared to 301 for the current :latest.
solsson added a commit to solsson/dockerfiles that referenced this pull request Jul 26, 2017
helm/charts#144

Beware: This image is 841 MB, compared to 301 for the current :latest.
See README.md for upgrade instructions.
jordanjennings pushed a commit to jordanjennings/charts that referenced this pull request Nov 27, 2018
…-papertrail-system

[T2A-143] fix oos-detection papertrail system name
shubham14bajpai pushed a commit to shubham14bajpai/charts that referenced this pull request Sep 6, 2021
[openebs]chore(release): add OpenEBS 2.0.0 YAML for AWS