Add docs for using the Kubernetes monitor on self-hosted instances #2755
base: main
Conversation
…-on-self-hosted-instances
Nit: some minor capitalisation for Kubernetes to remain consistent, and spelling error updates. (I assume we go with Australian-style English here at Octopus instead of US?)
src/pages/docs/kubernetes/targets/kubernetes-agent/ha-cluster-support.md
src/pages/docs/kubernetes/live-object-status/troubleshooting/index.md
LGTM with minor nits :)
@@ -148,7 +148,7 @@ spec:

  Unlike the Octopus Web Portal, Polling Tentacles must be able to connect to each Octopus node individually to pick up new tasks. Our Octopus HA cluster assumes two nodes, therefore a load balancer is required for each node to allow direct access.

- The following YAML creates load balancers with separate public IPs for each node. They direct web traffic to each node on port `80` and Polling Tentacle traffic on port `10943`.
+ The following YAML creates load balancers with separate public IPs for each node. They direct web traffic to each node on port `80`, Polling Tentacle traffic on port `10943` and gRPC traffic on port `8443`.
Nit: I'm an Oxford comma stan - feel free to discard if it's against our style guides.
Suggested change:
- The following YAML creates load balancers with separate public IPs for each node. They direct web traffic to each node on port `80`, Polling Tentacle traffic on port `10943` and gRPC traffic on port `8443`.
+ The following YAML creates load balancers with separate public IPs for each node. They direct web traffic to each node on port `80`, Polling Tentacle traffic on port `10943`, and gRPC traffic on port `8443`.
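For context, the per-node load balancer described in this hunk would expose all three ports on one `LoadBalancer` Service. A minimal sketch of one such Service, assuming one exists per node; the name, selector label, and web target port are illustrative assumptions, not the guide's actual manifest:

```yaml
# Hypothetical per-node load balancer; one of these would exist per Octopus node.
apiVersion: v1
kind: Service
metadata:
  name: octopus-node-0-lb        # illustrative name
spec:
  type: LoadBalancer
  selector:
    statefulset.kubernetes.io/pod-name: octopus-0   # assumed per-pod label
  ports:
    - name: web
      port: 80
      targetPort: 8080           # assumed Web Portal container port
    - name: polling-tentacle
      port: 10943
      targetPort: 10943
    - name: grpc
      port: 8443
      targetPort: 8443
```

Keeping `80`, `10943`, and `8443` on the same Service gives each node a single public IP, which matches the "separate public IPs for each node" wording above.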
I knew I was on the right side of the fence: https://www.octopus.design/latest/brand/writing/grammar-rules-8WpUimhK-8WpUimhK#section-the-oxford-comma-7b
@@ -218,7 +226,7 @@ Most of the YAML in this guide can be used with any Kubernetes provider. However

  To find out more about storage classes, refer to the [Kubernetes Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/) documentation.
  :::

- Whilst it is possible to mount external storage by manually defining Persistent Volume definitions in YAML, Cloud providers offering Kubernetes managed services typically include the option to dynamically provision file storage based on persistent volume claim definitions.
+ While it is possible to mount external storage by manually defining Persistent Volume definitions in YAML, Cloud providers offering Kubernetes managed services typically include the option to dynamically provision file storage based on persistent volume claim definitions.
Nit: Consider changing to be less formal, since you're also changing from "whilst".
Suggested change:
- While it is possible to mount external storage by manually defining Persistent Volume definitions in YAML, Cloud providers offering Kubernetes managed services typically include the option to dynamically provision file storage based on persistent volume claim definitions.
+ While it's possible to mount external storage by manually defining Persistent Volume definitions in YAML, Cloud providers offering Kubernetes managed services typically include the option to dynamically provision file storage based on persistent volume claim definitions.
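With dynamic provisioning as described in this hunk, a persistent volume claim alone is enough and the provider's storage class creates the backing volume. A minimal sketch; the claim name, storage class, and size are illustrative assumptions:

```yaml
# Hypothetical PVC relying on dynamic provisioning via a provider storage class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: octopus-server-files     # illustrative name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile    # provider-specific class; assumed example
  resources:
    requests:
      storage: 10Gi
```

No `PersistentVolume` object is defined by hand; the named storage class provisions one to satisfy the claim.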
@@ -17,6 +17,12 @@ This page will help you diagnose and solve issues with Kubernetes Live Object St

  Some firewalls may prevent the applications from making outbound connections over non-standard ports. If this is preventing the Kubernetes monitor from connecting to your Octopus Server, configure your environment to allow outbound connections.

+ For customers running a self-hosted instance, ensure that Octopus Server's `grpcListenPort` parameter is configured to be 8443, or that the Kubernetes monitors `server-grpc-url` parameter has been updated to match.
Nit: monitor should be a possessive noun, and it's better to be explicit that the URL parameter has to be updated to match.
Suggested change:
- For customers running a self-hosted instance, ensure that Octopus Server's `grpcListenPort` parameter is configured to be 8443, or that the Kubernetes monitors `server-grpc-url` parameter has been updated to match.
+ For customers running a self-hosted instance, ensure that Octopus Server's `grpcListenPort` parameter is configured to be 8443. If using a port other than 8443, ensure the Kubernetes monitor's `server-grpc-url` parameter has been updated to match.
Updates to installation and Kubernetes Live Object Status docs in preparation for its supported release for self-hosted instances.
To be merged closer to the release date for Octopus 2025.3.
Internal ticket