Add docs for using the Kubernetes monitor on self-hosted instances #2755


Open · wants to merge 6 commits into base: main

Conversation

@eddymoulton eddymoulton commented Jun 26, 2025

Updates to installation and Kubernetes Live Object Status docs in preparation for its supported release for self-hosted instances.

To be merged closer to the release date for Octopus 2025.3

Internal ticket

@eddymoulton eddymoulton marked this pull request as ready for review July 31, 2025 23:30

@octonautcal octonautcal left a comment


nit: some minor capitalisation fixes for Kubernetes to remain consistent, plus spelling corrections. (I assume we go with Australian-style English here at Octopus instead of US?)


@liam-mackie liam-mackie left a comment


LGTM with minor nits :)

@@ -148,7 +148,7 @@ spec:

Unlike the Octopus Web Portal, Polling Tentacles must be able to connect to each Octopus node individually to pick up new tasks. Our Octopus HA cluster assumes two nodes, therefore a load balancer is required for each node to allow direct access.

The following YAML creates load balancers with separate public IPs for each node. They direct web traffic to each node on port `80` and Polling Tentacle traffic on port `10943`.
The following YAML creates load balancers with separate public IPs for each node. They direct web traffic to each node on port `80`, Polling Tentacle traffic on port `10943` and gRPC traffic on port `8443`.


Nit: I'm an Oxford comma stan - feel free to discard if it's against our style guides.

Suggested change
The following YAML creates load balancers with separate public IPs for each node. They direct web traffic to each node on port `80`, Polling Tentacle traffic on port `10943` and gRPC traffic on port `8443`.
The following YAML creates load balancers with separate public IPs for each node. They direct web traffic to each node on port `80`, Polling Tentacle traffic on port `10943`, and gRPC traffic on port `8443`.
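For context, the per-node load balancer described above could be sketched as a `LoadBalancer` Service like the following. This is a minimal sketch, not the guide's actual manifest: the Service name, pod selector label, and the `targetPort` values are assumptions and would need to match your Octopus StatefulSet.

```yaml
# Hypothetical example: one Service of this shape per Octopus node.
# Names, labels, and targetPorts are assumptions; align them with your deployment.
apiVersion: v1
kind: Service
metadata:
  name: octopus-node-0    # assumed name; repeat per node (octopus-node-1, ...)
spec:
  type: LoadBalancer      # provisions a separate public IP per node
  selector:
    statefulset.kubernetes.io/pod-name: octopus-0   # pins traffic to one pod
  ports:
    - name: web           # Octopus Web Portal traffic
      port: 80
      targetPort: 8080    # assumed container web port
      protocol: TCP
    - name: tentacle      # Polling Tentacle traffic
      port: 10943
      targetPort: 10943
      protocol: TCP
    - name: grpc          # Kubernetes monitor gRPC traffic
      port: 8443
      targetPort: 8443
      protocol: TCP
```

Because Polling Tentacles must reach each node individually, a shared load balancer across all pods would not work here; one Service per pod preserves node-level addressability.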


@@ -218,7 +226,7 @@ Most of the YAML in this guide can be used with any Kubernetes provider. However
To find out more about storage classes, refer to the [Kubernetes Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/) documentation.
:::

Whilst it is possible to mount external storage by manually defining Persistent Volume definitions in YAML, Cloud providers offering Kubernetes managed services typically include the option to dynamically provision file storage based on persistent volume claim definitions.
While it is possible to mount external storage by manually defining Persistent Volume definitions in YAML, Cloud providers offering Kubernetes managed services typically include the option to dynamically provision file storage based on persistent volume claim definitions.


Nit: Consider changing to be less formal, since you're also changing from `whilst`.

Suggested change
While it is possible to mount external storage by manually defining Persistent Volume definitions in YAML, Cloud providers offering Kubernetes managed services typically include the option to dynamically provision file storage based on persistent volume claim definitions.
While it's possible to mount external storage by manually defining Persistent Volume definitions in YAML, Cloud providers offering Kubernetes managed services typically include the option to dynamically provision file storage based on persistent volume claim definitions.
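The dynamic provisioning mentioned above could look like the following persistent volume claim. This is a sketch under assumptions: the claim name is hypothetical, and the storage class name varies by provider (for example `azurefile` on AKS or `efs-sc` on EKS), so substitute whichever class your cluster offers.

```yaml
# Hypothetical claim; the cloud provider's provisioner creates the backing
# volume automatically when this PVC is bound.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: octopus-shared-storage   # assumed name
spec:
  accessModes:
    - ReadWriteMany              # shared across multiple Octopus nodes
  storageClassName: azurefile    # provider-specific; adjust for your cluster
  resources:
    requests:
      storage: 10Gi
```

With a storage class in place, no manual `PersistentVolume` definition is needed; the claim alone triggers provisioning.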

@@ -17,6 +17,12 @@ This page will help you diagnose and solve issues with Kubernetes Live Object St

Some firewalls may prevent the applications from making outbound connections over non-standard ports. If this is preventing the Kubernetes monitor from connecting to your Octopus Server, configure your environment to allow outbound connections.

For customers running a self-hosted instance, ensure that Octopus Server's `grpcListenPort` parameter is configured to be 8443, or that the Kubernetes monitors `server-grpc-url` parameter has been updated to match.


Nit: "monitor" should be a possessive noun, and it's better to be explicit that the URL parameter has to be updated to match.

Suggested change
For customers running a self-hosted instance, ensure that Octopus Server's `grpcListenPort` parameter is configured to be 8443, or that the Kubernetes monitors `server-grpc-url` parameter has been updated to match.
For customers running a self-hosted instance, ensure that Octopus Server's `grpcListenPort` parameter is configured to be 8443. If using a port other than 8443, ensure the Kubernetes monitor's `server-grpc-url` parameter has been updated to match.
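When diagnosing the firewall scenario above, a quick way to check whether the gRPC port is reachable is a plain TCP probe. The hostname below is a placeholder (an assumption); substitute your Octopus Server's address, and run the probe from the network where the Kubernetes monitor runs.

```shell
# octopus.example.com is a placeholder; use your Octopus Server hostname.
# -w 3 gives up after 3 seconds so a blocked port fails fast.
nc -vz -w 3 octopus.example.com 8443 || echo "gRPC port 8443 not reachable"
```

If the probe fails but port 443 succeeds from the same machine, an outbound firewall rule on non-standard ports is the likely culprit.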

@liam-mackie

I assume we go with Australian style english here at Octopus instead of US?
FYI @octonautcal we use US English

4 participants