
Redesign image and kernel status for VMs #298

Closed · luxas opened this issue Aug 8, 2019 · 3 comments · Fixed by #307

Comments

luxas (Contributor) commented Aug 8, 2019

For images and kernels, this is the status object we have:

status:
  image:
    id: sha256:4484643658ed8004866b8b89d3683c4d1caaf9791649268b10bb2b0908175289
    repoDigests:
    - node@sha256:5ff6be48fb58391dcaa4875d2c9513b34762a11982934fe9ff0f8dfb07eb2ff1
    size: 151572053B
  kernel:
    id: sha256:5e5165af3e5dda9371abce633189eed1f80f9c493635e37264c6783fd5e0f6c9
    repoDigests:
    - weaveworks/ignite-kernel@sha256:18ee1005bb3881faf09059038eca8b298d9d7598cde675c4a900e275fe0a2454
    size: 51377512B

vs. the Kubernetes way of doing it:

local image:

containerStatuses:
- containerID: docker://d1ae64e547e51eed8d52c10d79474eee1d0152cb747ca2877f8aba456242c1a4
  image: local-image:latest
  imageID: docker://sha256:1834146a110eb7ac731b1f0fa87bbc001e8dd493c11b1ee58e549981cb6ecc8a

pullable image:

containerStatuses:
- containerID: docker://6ecf188fe796a0a970775db4946cc84464027199d80316cc395234c877225e33
  image: weaveworks/weave-kube:2.5.2
  imageID: docker-pullable://weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100

Proposed design:

status:
  runtime:
    # This is the ID of the container/isolation mechanism that Firecracker is executing in. The prefix implies which runtime is being used.
    id: docker://6ecf188fe796a0a970775db4946cc84464027199d80316cc395234c877225e33
    id: containerd://d1ae64e547e51eed8d52c10d79474eee1d0152cb747ca2877f8aba456242c1a4
    id: jailer://5e5165af3e5dda9371abce633189eed1f80f9c493635e37264c6783fd5e0f6c9
  image: # Also the same field for kernel:
    # This describes what OCI content the image was built from.
    # In the case of a local Docker image (not pushed), the prefix will be docker://sha256:<id>
    id: docker://sha256:1834146a110eb7ac731b1f0fa87bbc001e8dd493c11b1ee58e549981cb6ecc8a
    # If the image was pulled from a registry, it also has a so-called "Repo Digest".
    # With this digest, it is possible to know exactly what content was used for the build.
    # This is now part of the OCI specification, so a fully-qualified image name + digest
    # exactly describes the content:
    id: oci://docker.io/library/node@sha256:5ff6be48fb58391dcaa4875d2c9513b34762a11982934fe9ff0f8dfb07eb2ff1
    # We might not need the size field, but we'll keep it here for now.
    size: 51377512B

This design:
a) Allows for multiple container runtimes (via the .status.runtime.id field prefix)
b) Supports unpushed Docker images, which have no calculated distribution digest, via the docker:// prefix in .status.image.id and .status.kernel.id
c) Works with any container runtime that implements the OCI distribution spec, via the oci:// prefix
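To make the prefix handling concrete, here is a minimal Go sketch (parseStatusID is a hypothetical helper for illustration, not Ignite code) that splits such a prefixed ID into its scheme and the underlying reference:

package main

import (
	"fmt"
	"strings"
)

// parseStatusID splits a prefixed status ID such as "docker://sha256:<id>"
// or "oci://<name>@sha256:<digest>" into its scheme and raw reference.
// Hypothetical helper for illustration only.
func parseStatusID(id string) (scheme, ref string, err error) {
	parts := strings.SplitN(id, "://", 2)
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return "", "", fmt.Errorf("invalid prefixed ID: %q", id)
	}
	return parts[0], parts[1], nil
}

func main() {
	scheme, ref, err := parseStatusID("oci://docker.io/library/node@sha256:5ff6be48fb58391dcaa4875d2c9513b34762a11982934fe9ff0f8dfb07eb2ff1")
	if err != nil {
		panic(err)
	}
	fmt.Println(scheme) // oci
	fmt.Println(ref)    // docker.io/library/node@sha256:5ff6...
}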

@luxas luxas added the api/v1alpha2, kind/api-change, priority/important-soon, and kind/design labels on Aug 8, 2019
@luxas luxas added this to the v1alpha2 milestone Aug 8, 2019
@luxas luxas self-assigned this Aug 8, 2019
twelho (Contributor) commented Aug 9, 2019

Looks good all in all, a couple of points:

  • Having the runtime container/isolation ID on a per-VM basis enables us to potentially run multiple isolators at the same time. I see this as a benefit for special use cases.
  • Currently we have
spec:
  image:
    ociClaim:
      ref: weaveworks/ignite-ubuntu:latest
      type: Docker

in the specification (also for kernels). .spec.image.type is unused, as the image is requested from the runtime. Should we change it to:

spec:
  image:
    # For images being pulled by an OCI provider, we use the provider as the prefix and
    # a reference to specify the image including the tag
    ociRef: docker://weaveworks/ignite-ubuntu:latest
    # To use another provider, swap out the prefix similar to the runtime
    ociRef: containerd://weaveworks/ignite-ubuntu:latest
    # Reserve the possibility to add fields for other details

to align with the proposed design for status? The prefix makes it possible to select the OCI provider used to retrieve the image.
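For illustration only, a rough Go sketch of that prefix-based provider selection (the ImageProvider interface and the concrete provider types are assumptions for this sketch, not Ignite's actual API):

package main

import (
	"fmt"
	"strings"
)

// ImageProvider is an illustrative interface; Ignite's real abstraction may differ.
type ImageProvider interface {
	Pull(ref string) error
}

// Dummy providers standing in for real Docker/containerd clients.
type dockerProvider struct{}
type containerdProvider struct{}

func (dockerProvider) Pull(ref string) error     { fmt.Println("docker pull", ref); return nil }
func (containerdProvider) Pull(ref string) error { fmt.Println("containerd pull", ref); return nil }

// providerFor picks a provider based on the ociRef prefix and returns
// it together with the bare image reference.
func providerFor(ociRef string) (ImageProvider, string, error) {
	switch {
	case strings.HasPrefix(ociRef, "docker://"):
		return dockerProvider{}, strings.TrimPrefix(ociRef, "docker://"), nil
	case strings.HasPrefix(ociRef, "containerd://"):
		return containerdProvider{}, strings.TrimPrefix(ociRef, "containerd://"), nil
	default:
		return nil, "", fmt.Errorf("unknown OCI provider in ref %q", ociRef)
	}
}

func main() {
	p, ref, err := providerFor("docker://weaveworks/ignite-ubuntu:latest")
	if err != nil {
		panic(err)
	}
	_ = p.Pull(ref) // docker pull weaveworks/ignite-ubuntu:latest
}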

twelho (Contributor) commented Aug 9, 2019

.status.image.size is necessary if we want to know the source size, as that information is otherwise lost when converting to Ignite's internal format.

luxas (Contributor, Author) commented Aug 9, 2019

After thinking about this: yes, I think it makes sense, but I don't want to bake the choice of container runtime into the VM spec, at least not yet.

So let's simplify the current state to:

spec:
  image: # and kernel:
    # Any image that is following the OCI specification is valid here.
    ociRef: weaveworks/ignite-ubuntu:latest
    ociRef: weaveworks/ignite-ubuntu@sha256:abc
    # Reserve the possibility to add fields for other details
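For illustration, a small stdlib-only Go sketch (refKind is a hypothetical helper, not part of Ignite) of how the two accepted ociRef forms could be distinguished:

package main

import (
	"fmt"
	"strings"
)

// refKind classifies a plain OCI reference as "digest" (immutable,
// name@sha256:<hash>) or "tag" (mutable, name:tag or a bare name, which
// registries default to :latest). Hypothetical helper for illustration.
func refKind(ociRef string) string {
	if strings.Contains(ociRef, "@sha256:") {
		return "digest"
	}
	return "tag"
}

func main() {
	fmt.Println(refKind("weaveworks/ignite-ubuntu:latest"))     // tag
	fmt.Println(refKind("weaveworks/ignite-ubuntu@sha256:abc")) // digest
}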

I'll open a new issue.
