
Hot loading not working when using --mount-inotify with containerd runtime and Kubernetes enabled #910

Closed
chrismith-equinix opened this issue Nov 29, 2023 · 4 comments · Fixed by #923
Labels
bug Something isn't working

Comments


chrismith-equinix commented Nov 29, 2023

Description

Based on the discussion at the bottom of this thread, hot loading of Node.js apps should work when the Node.js workload(s) are deployed in the default Kubernetes namespace and the containerd runtime is in use. This doesn't appear to be working for me.

I installed colima with brew install --HEAD colima about 2-3 weeks ago.

We are running 3 different JS apps, each with hot loading enabled, and all of them work when run outside of colima. Here are the startup commands used for each app:

  1. nodemon app.js
  2. ENV=development webpack serve --color --progress --config webpack.development.js
  3. nest start --watch
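For context on why --mount-inotify matters here: watchers like the three tools above react to filesystem change events (inotify on Linux), and that flag exists so edits made on the macOS host are replayed as events inside the Colima VM. A minimal sketch of the underlying "detect a change" mechanism, using mtime polling rather than inotify purely so it runs anywhere (all names below are illustrative, not part of nodemon or Colima):

```python
import os
import tempfile
import time

def poll_for_change(path, last_mtime):
    """Return the file's current mtime and whether it changed since last_mtime.

    Real watchers subscribe to inotify events instead of polling; the point
    of --mount-inotify is that a save on the macOS host must surface as such
    an event inside the VM, or the watcher never fires.
    """
    mtime = os.stat(path).st_mtime_ns
    return mtime, mtime != last_mtime

# Demo: simulate an editor save and detect it by polling.
with tempfile.NamedTemporaryFile("w", suffix=".js", delete=False) as f:
    f.write("console.log('v1');\n")
    path = f.name

baseline = os.stat(path).st_mtime_ns
time.sleep(0.05)  # ensure the next write lands at a later timestamp
with open(path, "w") as f:
    f.write("console.log('v2');\n")  # the "code change on the host"

mtime, changed = poll_for_change(path, baseline)
print("change detected:", changed)
os.unlink(path)
```

If the mount layer swallows the change event (and the mtime never moves inside the guest), no watcher strategy built on these primitives can trigger a reload.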

There are other apps running as well (e.g. MongoDB, Redis and other services), but I have removed them here for brevity. All apps come up and run fine. All volume mounts are working fine as well: I can kubectl exec into a pod and create a file, and can see it appear on the macOS host.

Here is the set of Kubernetes manifests used to deploy each Node.js app. The same manifests are used for each of the 3 apps, just with different names/labels etc.

Persistent Volume

apiVersion: v1
kind: PersistentVolume
metadata:
  name: app1-pv
  labels:
    app: app1
spec:
  storageClassName: my-storage-class
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /apps/node-js-app1
    type: Directory
  persistentVolumeReclaimPolicy: Delete

Persistent Volume Claim

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app1-pvc
  namespace: default
  labels:
    app: app1
spec:
  volumeName: app1-pv
  accessModes:
    - ReadWriteOnce
  storageClassName: my-storage-class
  resources:
    requests:
      storage: 1Gi

Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
  namespace: default
  labels:
    app: app1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
      - env:
        - name: KUBERNETES_CLUSTER_DOMAIN
          value: cluster.local
        image: my-registry/app1:localdev
        imagePullPolicy: IfNotPresent
        name: app1
        ports:
        - containerPort: 3000
        volumeMounts:
        - name: app1-code
          mountPath: /code
      volumes:
      - name: app1-code
        persistentVolumeClaim:
          readOnly: false
          claimName: app1-pvc
      serviceAccountName: my-svc-account

Version

colima version HEAD-f32b597
git commit: f32b597

runtime: containerd
arch: x86_64
client: v1.4.0
server: v1.7.2
limactl version 0.18.0
qemu-img version 8.1.2
Copyright (c) 2003-2023 Fabrice Bellard and the QEMU Project developers

Operating System

  • macOS Intel <= 13 (Ventura)
  • macOS Intel >= 14 (Sonoma)
  • Apple Silicon <= 13 (Ventura)
  • Apple Silicon >= 14 (Sonoma)
  • Linux

Output of colima status

INFO[0000] colima is running using macOS Virtualization.Framework
INFO[0000] arch: x86_64
INFO[0000] runtime: containerd
INFO[0000] mountType: virtiofs
INFO[0000] address: 192.168.107.2
INFO[0000] kubernetes: enabled

Reproduction Steps

  1. colima start
    --mount $APP_ROOT_DIR/node-js-app1:/apps/node-js-app1
    --mount $APP_ROOT_DIR/node-js-app2:/apps/node-js-app2
    --mount $APP_ROOT_DIR/node-js-app3:/apps/node-js-app3
    --runtime containerd
    --kubernetes
    --network-address
    --mount-inotify
    --dns 8.8.8.8
    --dns 1.1.1.1
    --vm-type vz
    --mount-type virtiofs
    --very-verbose
    --cpu 4
    --memory 8
    --disk 20

  2. kubectl config use-context colima

  3. kubectl apply -f pv.yaml

  4. kubectl apply -f pvc.yaml

  5. kubectl apply -f app1.yaml

  6. Repeat steps 3-5 for 2 other apps

  7. Tail log for app1 using kubectl logs --follow <APP1_POD>

  8. Make a code change to app1's code on the macOS host

  9. The app1 log doesn't update/change as it normally would when a code change is made.

Expected behaviour

Hot code loading should work when using the containerd runtime and Kubernetes.

Additional context

Also noting that hot loading for the containerd runtime should work in all Kubernetes namespaces, not just default, as outlined in this thread.

@abiosoft abiosoft added the bug Something isn't working label Nov 30, 2023
@abiosoft
Owner

Thanks for the detailed feedback.

@abiosoft
Owner

This should be fixed now.

You can wait for the next release or try the development version with brew install --HEAD colima.

@flokain

flokain commented Oct 1, 2024

I have noticed that this only works with a single container running.

There is a bug in nerdctl where nerdctl inspect .... returns invalid JSON in the form [{container1}][{container2}]... instead of [{container1},{container2}]; it is addressed in this bug report on nerdctl: containerd/nerdctl#3476
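The malformed shape described above is easy to confirm: back-to-back JSON arrays are not a single valid JSON document, so a strict parser rejects them. A small Python sketch (the strings below are stand-ins for real nerdctl inspect output, and the workaround function is purely illustrative, not anything Colima ships):

```python
import json

# Stand-in for what the buggy nerdctl emits with two containers:
# two concatenated arrays rather than one array with two elements.
bad = '[{"name": "container1"}][{"name": "container2"}]'
good = '[{"name": "container1"}, {"name": "container2"}]'

try:
    json.loads(bad)
    parsed_ok = True
except json.JSONDecodeError:
    parsed_ok = False  # strict parsing fails with "Extra data"

def parse_concatenated_arrays(text):
    """Possible client-side workaround: decode the stream value by value
    with raw_decode and merge the resulting arrays."""
    decoder = json.JSONDecoder()
    items, idx = [], 0
    while idx < len(text):
        value, end = decoder.raw_decode(text, idx)
        items.extend(value)  # each decoded top-level value is an array
        idx = end
    return items

merged = parse_concatenated_arrays(bad)
print(parsed_ok, len(merged))  # → False 2
```

Any consumer that feeds the concatenated form straight into a strict JSON parser, as Colima presumably does, will only succeed while exactly one container is running.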

So should it be reopened?

By the way, this requires nerdctl 2.0, which is not shipped with k3s versions at the moment, and the nerdctl 1.7.6 version they use doesn't have a backported fix for containerd/nerdctl#2939.

colima status -p devenv
INFO[0000] colima [profile=devenv] is running using macOS Virtualization.Framework 
INFO[0000] arch: aarch64                                
INFO[0000] runtime: containerd                          
INFO[0000] mountType: virtiofs                          
INFO[0000] address: 192.168.106.3                       
INFO[0000] kubernetes: enabled

@abiosoft
Owner

abiosoft commented Oct 1, 2024

I think a new issue should be opened instead to address this.
