Add Kubernetes Example Deployment#149

Open
Mr-Philipp wants to merge 1 commit into jkingsman:main from Mr-Philipp:feature/k8s-deployment
Conversation

@Mr-Philipp

Adds a simple Kubernetes Deployment with persistent storage and an ingress.

@jkingsman
Owner

Hm, this feels like it's an adaptation of something that is maybe working for you, but I wonder if it's a good example to be setting. I haven't run this yet but some things that stand out to me from a quick glance over lunch:

  • multiple identical commented-out volumes
  • PVC is pinned to a commented out PV which I think iirc just means the pod never comes up
  • isn't RWO sufficient for a db? I'm wondering if I missed something in the yaml but I wouldn't expect anything more than that to be necessary
  • confusing comments around image version pinning?
  • not sure that I love having the venv in the PVC

I'm open to doing k8s but I'd like it to be a pretty shiny example of a good deployment rather than an adaptation of something that mostly-works. What do you think?

@jkingsman jkingsman force-pushed the main branch 2 times, most recently from 1852da2 to 7d5cfde Compare April 8, 2026 05:07
@jkingsman
Owner

I'm going to close this for now, but feel free to take another pass if you're still interested in introducing this! And apologies for not accepting as-is -- for config examples, it's important to me that they be super readily usable for users and pretty polished/best-practices.

@jkingsman jkingsman closed this Apr 8, 2026
@Mr-Philipp
Author

Sorry, I was busy last week and had no time to review your comment.

Thanks for the feedback. I'll try to answer your points, but in general this deployment is the most generic one, suitable for most of the classic clusters out there, with some commented-out examples to cover common deviations:

  • multiple identical commented-out volumes

Yes, that's on me. I'm going to remove the second one.

  • PVC is pinned to a commented out PV which I think iirc just means the pod never comes up

That depends on whether the cluster has a storage provisioner in use or not. In this case I provided an example for local storage in case there is no provisioner in use.
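For clusters without a dynamic provisioner, the idea is that the commented-out PV gets uncommented so the claim's `volumeName` binds to it statically. A minimal sketch of that pairing (the claim name, capacity, and host path here are illustrative, not taken from the PR; only `local-meshcore` appears in the reviewed YAML):

```yaml
# Static PersistentVolume for clusters with no storage provisioner.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-meshcore
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/meshcore        # node-local path; adjust for your cluster
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: meshcore-data           # illustrative claim name
spec:
  volumeName: local-meshcore    # pins the claim to the PV above
  storageClassName: ""          # disable dynamic provisioning for this claim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

If the PV stays commented out while `volumeName` is still set, the PVC remains `Pending` and the pod never schedules, which is the failure mode raised in the review.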

  • isn't RWO sufficient for a db? I'm wondering if I missed something in the yaml but I wouldn't expect anything more than that to be necessary

RWO is correct and MUST be used for databases; otherwise (with RWM) multiple instances would be able to mount the same volume, which would end catastrophically ;)
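For reference, Kubernetes spells these modes `ReadWriteOnce` (RWO) and `ReadWriteMany` (RWX), so the single-writer claim being argued for here would read (a config fragment, not the PR's exact YAML):

```yaml
accessModes:
  - ReadWriteOnce   # only one node mounts read-write; the safe choice for a single-instance DB
```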

  • confusing comments around image version pinning?

Can you explain what's confusing about the comment? (You probably mean line 99, right?)

  • not sure that I love having the venv in the PVC

Good point. I can remove it from the PVC, but then everything in the venv needs to be built after every pod restart. Is that okay?

volumeName: local-meshcore
# storageClassName: 'generic' # optional: set storageClass
accessModes:
- ReadWriteMany
Owner


RWM should be RWO

@jkingsman
Owner

Yeah, mainly around the version confusion.

I can remove it from the PVC, but then everything in the venv needs to be built after every pod restart.

the Docker image already has all dependencies installed at build time. uv run does a quick sync on startup (~400ms), but the actual application code and dependencies are baked into the image. Persisting the venv on the PVC means stale dependencies can survive across image upgrades.
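If the venv does need a writable mount at all, one hedged alternative is an `emptyDir` instead of the PVC, so it is recreated on every pod start rather than persisted across image upgrades. A sketch of the pod template (image reference, volume names, and mount paths are illustrative, not taken from the PR):

```yaml
# In the Deployment's pod template: keep application data on the PVC,
# but give the venv a throwaway emptyDir so stale dependencies
# cannot survive an image upgrade.
containers:
  - name: meshcore
    image: ghcr.io/example/meshcore:1.2.3   # hypothetical pinned image tag
    volumeMounts:
      - name: data
        mountPath: /data            # persistent application data
      - name: venv
        mountPath: /app/.venv       # recreated on every pod start
volumes:
  - name: data
    persistentVolumeClaim:
      claimName: meshcore-data      # illustrative claim name
  - name: venv
    emptyDir: {}
```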

I've begun integrating some k3s tests into my exhaustive testing scenarios, and this is the deployment I've been testing with, with the kind of comments I'd like to see to help new users adapt it to their environment:

k8s.example.yml

@jkingsman jkingsman reopened this Apr 11, 2026