Update doc (elastic#1319) (elastic#1329)
* Update persistent storage section
* Update kibana localhost url to use https
* Update k8s resources names in accessing-services doc
* Mention SSL browser warning
* Fix bulleted list
thbkrkr authored Jul 22, 2019
1 parent 50a9c90 commit c59d455
Showing 2 changed files with 13 additions and 12 deletions.
17 changes: 9 additions & 8 deletions docs/accessing-services.asciidoc
@@ -25,7 +25,7 @@ To access Elasticsearch and Kibana, the operator manages a default user named `e

[source,sh]
----
-> kubectl get secret hulk-elastic-user -o go-template='{{.data.elastic | base64decode }}'
+> kubectl get secret hulk-es-elastic-user -o go-template='{{.data.elastic | base64decode }}'
42xyz42citsale42xyz42
----
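The `base64decode` template function mirrors what `base64 -d` does locally, since Kubernetes stores secret data base64-encoded. The round trip can be sketched without a cluster (the password shown is the hypothetical one from the example above, not a real secret):

[source,sh]
----
# Simulate the stored secret value: Kubernetes keeps secret data
# base64-encoded (hypothetical password from the example above)
ENCODED=$(printf '42xyz42citsale42xyz42' | base64)

# Equivalent of the go-template base64decode shown above
printf '%s' "$ENCODED" | base64 -d
----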

@@ -141,8 +141,9 @@ spec:
You can bring your own certificate to configure TLS to ensure that communication between HTTP clients and the cluster is encrypted.

Create a Kubernetes secret with:
-. tls.crt: the certificate (or a chain).
-. tls.key: the private key to the first certificate in the certificate chain.
+
+- tls.crt: the certificate (or a chain).
+- tls.key: the private key to the first certificate in the certificate chain.
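If you do not have a certificate at hand, a throwaway self-signed pair can be generated with `openssl` for testing (a sketch only; the CN `hulk-es-http` is an assumption matching the example cluster name, and you would use your real DNS name in practice):

[source,sh]
----
# Generate a self-signed certificate and private key, for testing only
# (the CN is hypothetical; substitute your cluster's HTTP service name)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=hulk-es-http" \
  -keyout tls.key -out tls.crt

# The two files map to the tls.key and tls.crt secret keys described above
ls tls.crt tls.key
----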

[source,sh]
----
@@ -178,7 +179,7 @@ NAME=hulk
kubectl get secret "$NAME-ca" -o go-template='{{index .data "ca.pem" | base64decode }}' > ca.pem
PW=$(kubectl get secret "$NAME-elastic-user" -o go-template='{{.data.elastic | base64decode }}')
-curl --cacert ca.pem -u elastic:$PW https://$NAME-es:9200/
+curl --cacert ca.pem -u elastic:$PW https://$NAME-es-http:9200/
----
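The `-u elastic:$PW` option makes curl send an HTTP Basic `Authorization` header; what it transmits can be reproduced locally (again using the hypothetical example password, not a real credential):

[source,sh]
----
# curl -u user:password sends: Authorization: Basic base64(user:password)
# Hypothetical password from the example above
PW='42xyz42citsale42xyz42'
printf 'Authorization: Basic %s\n' "$(printf 'elastic:%s' "$PW" | base64)"
----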

*Outside the Kubernetes cluster*
@@ -191,11 +192,11 @@ curl --cacert ca.pem -u elastic:$PW https://$NAME-es:9200/
----
NAME=hulk
-kubectl get secret "$NAME-ca" -o go-template='{{index .data "ca.pem" | base64decode }}' > ca.pem
-IP=$(kubectl get svc "$NAME-es" -o jsonpath='{.status.loadBalancer.ingress[].ip}')
-PW=$(kubectl get secret "$NAME-elastic-user" -o go-template='{{.data.elastic | base64decode }}')
+kubectl get secret "$NAME-es-http-certs-public" -o go-template='{{index .data "tls.crt" | base64decode }}' > tls.crt
+IP=$(kubectl get svc "$NAME-es-http" -o jsonpath='{.status.loadBalancer.ingress[].ip}')
+PW=$(kubectl get secret "$NAME-es-elastic-user" -o go-template='{{.data.elastic | base64decode }}')
-curl --cacert ca.pem -u elastic:$PW https://$IP:9200/
+curl --cacert tls.crt -u elastic:$PW https://$IP:9200/
----

Now you should get this message:
8 changes: 4 additions & 4 deletions docs/k8s-quickstart.asciidoc
@@ -230,7 +230,7 @@ Use `kubectl port-forward` to access Kibana from your local workstation:
kubectl port-forward service/quickstart-kb-http 5601
----
+
-Open `http://localhost:5601` in your browser.
+Open `https://localhost:5601` in your browser. Your browser will show a warning because the self-signed certificate configured by default is not verified by a third-party certificate authority and is therefore not trusted by your browser. You can either configure a link:k8s-accessing-elastic-services.html#k8s-setting-up-your-own-certificate[valid certificate] or acknowledge the warning for the purposes of this quick start.
+
Login with the `elastic` user. Retrieve its password with:
+
@@ -267,11 +267,11 @@ EOF

[float]
[id="{p}-persistent-storage"]
-=== Use persistent storage
+=== Update persistent storage

-Now that you have completed the quickstart, you can try out more features like using persistent storage. The cluster that you deployed in this quickstart uses a default persistent volume claim of 1GiB, without a storage class set. This means that the default storage class defined in the Kubernetes cluster is the one that will be provisioned.
+Now that you have completed the quickstart, you can try out more features like tweaking persistent storage. The cluster that you deployed in this quickstart uses a default persistent volume claim of 1GiB, without a storage class set. This means that the default storage class defined in the Kubernetes cluster is the one that will be provisioned.

-You can request a `PersistentVolumeClaim` in the cluster specification, to target any `PersistentVolume` class available in your Kubernetes cluster:
+You can request a `PersistentVolumeClaim` with a larger size in the Elasticsearch specification or target any `PersistentVolume` class available in your Kubernetes cluster:
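Such a claim could look roughly like this (a sketch only: the exact schema depends on the operator version, and `standard` is a hypothetical storage class name):

[source,yaml]
----
# Sketch: request 5Gi per node from a named storage class.
# Schema details vary by operator version; "standard" is hypothetical.
spec:
  nodes:
  - nodeCount: 3
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
        storageClassName: standard
----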

[source,yaml]
----
