vm based args in spec?? #964
So these args are for that container runtime instance. If the args are changed, then it's a new/different runtime instance, right?

What would be the difference between this and args used to exec runc then? It's weird.

@sameo Could you take a look at this?
Args to runc would be equivalent to args to kata-runtime. But these args are the args to whatever backing hypervisor. Maybe not a hard requirement, but good for reproducibility and audit.
…On Thu, May 24, 2018, 08:38 Michael Crosby ***@***.***> wrote:
What would be the difference between this and args used to exec runc then?
So these VM runtimes wrap another thing?
The hypervisor (KVM, Xen, ESX, etc.) does not read and process the spec. The spec is processed by the runtime itself, exactly like runc. The hypervisor creates and manages the VM that's going to host the container workload/process. You could think of the hypervisor as a different isolation and resource-sharing API than namespaces and cgroups, respectively. So instead of calling into a set of host kernel APIs, you call into a hypervisor API. OCI VM runtimes carry a set of default hypervisor arguments (static and dynamic) for each hypervisor they support. They're different from the set of arguments you'd pass to runc, as they only specify how the hypervisor should create the VM that the runtime is going to control in order to manage the container workload inside it. Does that clarify things a little?
How useful are these args given that in many cases most of the parameters are dynamic, added via QMP (in the qemu case)? So long as this is optional, it seems reasonable to me.
Also, this resolution is needed to prep for a release |
@sameo - For me it'd be helpful to have a more specific example use-case for this field. I'll try to add this here, PTAL. @vbatts @crosbymichael -- In the kata case, there are many items which we end up configuring on a per-node basis through a configuration.toml; an example of this is at [1]. Some potentially relevant items could optionally be configured on a per-container or per-workload basis instead. [1] - https://github.com/kata-containers/runtime/blob/master/cli/config/configuration.toml.in
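For reference, a hedged excerpt of the kind of per-node hypervisor settings found in kata's configuration.toml (the values here are illustrative defaults, not taken verbatim from the linked file):

```toml
[hypervisor.qemu]
path = "/usr/bin/qemu-lite-system-x86_64"
kernel = "/usr/share/kata-containers/vmlinuz.container"
image = "/usr/share/kata-containers/kata-containers.img"
machine_type = "pc"
default_vcpus = 1
default_memory = 2048  # MiB
```

Settings of this kind are exactly what the thread is debating: whether they belong in a per-node config file, in the OCI spec itself, or in per-workload annotations.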
@bergwolf PTAL.
@egernst If we think of highly customized guest configs for different workloads/needs on a per-pod-sandbox basis, I'm afraid there are just too many of them for each hypervisor type. E.g., the list you gave is just part of the configuration for QEMU; it does not make sense for some other hypervisors, which would have a different set of configurations. IOW, I tend to agree with @vbatts that we put them in labels or annotations. In kata, we can define and check for those labels/annotations, and override the default per-node configuration with the provided ones.
@bergwolf I agree.
I'm sure we could put some effort into abstracting some common arguments across most hypervisors, but we would need to handle a labels-based overriding mechanism anyway. This is a very powerful mechanism for customizing your virtualizer per pod/workload.
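A hedged sketch of the annotation-based override approach discussed above, using the standard OCI `annotations` field; the keys are hypothetical, since each runtime would define its own namespace, and OCI annotation values are always strings:

```json
{
  "annotations": {
    "com.example.hypervisor.default_vcpus": "4",
    "com.example.hypervisor.default_memory": "4096",
    "com.example.hypervisor.machine_type": "q35"
  }
}
```

The runtime would recognize the keys it owns, validate them, and use them to override its per-node defaults, while unknown keys are simply ignored; this keeps hypervisor-specific knobs out of the spec schema itself.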
We (Microsoft/hcsshim) have been pretty exclusively using annotations to override any default behavior. But we do try to honor the spec itself if it also has fields. So for example a hypervisor container that has a
There was an existing comment in the VM PR located here that was not resolved before merge:
#949 (comment)
The overall issue is why VM args need to be specified in the spec when the hypervisor is the one being invoked to read/process the spec.