Add abstractions for direct HCS interactions #1009
Conversation
internal/vm/vm.go
Outdated
MemoryManager
ProcessorManager
StorageQosManager
SerialManager
BootManager
SCSIManager
VSMBManager
Plan9Manager
VPMemManager
NetworkManager
PCIManager
VMSocketManager
WindowsConfigManager
LinuxConfigManager
This is what I'd like the most feedback on. Currently everything HCS supports is filled out, but if we're going through a different virtstack we don't know what is supported. The idea was to have a call we can make to determine exactly what is supported, and then use this information to decide what to do when we'd make a call like AddSCSI or similar.
I can imagine a quick check at the top of every call, like:
func (uvm *utilityVM) AddSCSIDisk(ctx context.Context, controller uint32, lun uint32, path string, typ vm.SCSIDiskType, readOnly bool) error {
if !uvm.supported.SCSI {
return vm.ErrNotSupported
}
// Rest of the owl...
}
This makes it so that when calling something we have a known error we can check against (vm.ErrNotSupported) to determine if we should try something else because it's not supported, or, if it's absolutely necessary, just bubble it up and fail. A good example would be if SCSI was supported but VPMem wasn't: during LCOW layer adds we could check if AddVPMem returned vm.ErrNotSupported and immediately skip any further calls and go try SCSI.
This does make things pretty drop-in, and the only extra boilerplate is checking this new error (and only in cases where we can do something useful with it). Although I'm wondering whether it might be better to have IsXSupported methods on each interface so we can know upfront.
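As a rough sketch of that fallback during an LCOW layer add (the variable names and the vm.VPMemImageFormatVHD1 constant are assumptions for illustration, not necessarily what ends up in the PR):
// Hypothetical fallback: prefer VPMem for the layer VHD, but if the virtstack
// reports it can't do VPMem, attach the same disk over SCSI instead.
if err := u.AddVPMemDevice(ctx, deviceNumber, layerPath, true, vm.VPMemImageFormatVHD1); err != nil {
	if !errors.Is(err, vm.ErrNotSupported) {
		return err
	}
	if err := u.AddSCSIDisk(ctx, controller, lun, layerPath, vm.SCSIDiskTypeVHD1, true); err != nil {
		return errors.Wrap(err, "failed to add layer over SCSI")
	}
}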
@kevpar had suggested another approach in a similar vein to the IsXSupported idea: exposing methods like this on the main UVM interface instead.
SCSI() SCSIManager
VPMem() VPMemManager
and instead you'd call these and either get a nil value if it isn't supported, or the interfaces that are currently embedded into UVM at the moment if it is. This does add a whole bunch of
scsi := uvm.u.SCSI()
if scsi == nil {
	return errors.New("oh no")
}
checks everywhere, but it might be cleaner. I don't have a strong opinion here.
Some variant of IsXSupported seems preferable, as it should be easier to code to the interface when what is supported can be determined up front. For instance, relying on AddSCSIDisk to fail for discovery forces the caller to potentially unwind and take a different path.
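A minimal sketch of that variant (the exact method set is whatever the PR defines; SCSISupported and the trimmed interface below are just illustrative):
// SCSIManager with an up-front capability query, so callers can branch before
// attempting the operation instead of probing with a call and checking the error.
type SCSIManager interface {
	// SCSISupported reports whether the backing virtstack can attach SCSI disks.
	SCSISupported() bool
	AddSCSIDisk(ctx context.Context, controller uint32, lun uint32, path string, typ SCSIDiskType, readOnly bool) error
}
Callers with a fallback would check SCSISupported() first; callers with no fallback could skip the check and just let the add fail.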
The reasoning right now is that there aren't many cases at all where we'd go a different route depending on whether a call failed. Ideally, if the client asked for a PCI device to be added and the virtstack didn't support it, we'd just hard fail with no fallback. Same if we can't mount in a directory they asked for because VSMB isn't operational, etc. The only real one is during layer adds: if VPMem failed for some specific reasons we just move on and try adding the same disk over SCSI, which isn't very hard to achieve with what's here, as we just add one extra known error that lets us continue if received.
The current approach keeps the boilerplate to a minimum, but it is a bit unintuitive to call something and check the error to determine if it's supported, rather than checking upfront (even if it's not needed). I'm fine with adding IsXSupported to everything and calling that before all of the resource operations, or, if everyone likes Kevin's idea, we can go that route instead.
Presumably the caller can just skip the IsSupported checks for cases where there is no fallback, so there shouldn't be much extra boilerplate code.
That's a good point! Ideally whatever we're talking to would return some sort of "isn't supported" error anyway if the virtstack doesn't support it, so at least the error strings are still informational for the ones with no fallback.
I think that means both an interface (to keep the root interface small) and a supported check (to allow runtime opt-in). So you might need something like:
vpmem, ok := uvm.u.(VPMEM)
if !ok || !vpmem.VPMEMSupported() {
	// fall back to SCSI
}
Yeah, even with the "call SCSI() and get back a SCSIManager" approach, at least the methods are off of UVM itself and self-contained in their own interfaces.
I am kind of fond of the
scsi := uvm.u.SCSI()
if scsi == nil {
// abort mission
}
// use SCSIManager
approach instead of trying to typecast, but I haven't actually given this a go.
If we keep adding functionality that's specific to one virt stack or another, it will be really annoying to have to update each UVM implementation to add boilerplate return nil or whatever (times two for the builder case). That's why having a separate interface for the optional stuff will be useful.
Are you referring to how typecasting might be a better move, to avoid filling out boilerplate like
func (uvm *utilityVM5) PCI() PCIManager {
// Assume capabilities of the virtstack have been
// determined already.
if uvm.capabilities.pci {
return &pci{}
}
return nil
}
for every implementation? And then on a
uvm.u.Supported(vm.VPMEM, vm.HotAdd)
call we'd do the capabilities check inside instead?
This change brings in a new generated ttrpc service intended to be implemented by a virtstack to facilitate running hypervisor-isolated containers. The goal is to abstract away the operations we need and rely on for running hypervisor-based containers so we're not calling directly into HCS for everything anymore, but rather make the underlying virtstack a configurable option. These definitions are missing some key Windows features like VSMB, but what's here currently is enough to run a Linux guest with networking. There will be future work to add an implementation of the UVM interface (microsoft#1009) based off of these definitions. Signed-off-by: Daniel Canter <dcanter@microsoft.com>
jstarks
left a comment
LMK what you think about the builder approach and the interface approach.
|
@jstarks I think it's a good idea, and very rust-esque too haha. I'll play with this today and ping you on update. |
|
@jstarks Ok, ready for eyes again. I like the builder idea. I still don't like that we'll have to typecast, as ideally any implementation would have all of the methods filled in, since we don't know what we're talking to and we'd rely on it to tell us what it supports. Maybe we get to a point where there are things HCS doesn't support that we want from "insert-virtstack", so not filling out the methods for HCS might make sense, but I'm just spitballing. The UVM interface itself is much nicer without all the cruft on it though, so I don't see how to have the best of both worlds (maybe having methods on the UVM interface that return the *Manager interfaces themselves like we'd brought up before, but that adds 5+ methods). I also don't think we need this state thing; I kind of liked it during testing but it was a relic of the prototype and likely doesn't buy us much. Before the builder interface it was nice to have branching logic for a single call by checking the state to see if we'd either like to add X to the doc or add X to the VM at runtime. |
)

// SCSIManager manages adding and removing SCSI devices for a Utility VM.
type SCSIManager interface {
My thinking was that each interface would have its own FooSupported method, rather than having a single Supported method and having to keep that in sync. But I can see how this would be fairly painful to implement.
@kevpar, what do you think?
That's what I'd talked about before. I find it much easier to add a new const and an extra switch case in one method than to add Supported calls to everything.
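For illustration, a rough sketch of that single switch-based method (remoteVM, the capabilities struct, and the specific vm.Resource constants are assumptions here, not the PR's actual definitions):
type capabilities struct {
	SCSI  bool
	VPMem bool
	PCI   bool
}
// Supported reports whether the backing virtstack supports the given resource.
// Adding a new resource means one new const in the vm package and one new case here;
// op (e.g. vm.Add vs. vm.HotAdd) could also be consulted per resource if needed.
func (uvm *remoteVM) Supported(resource vm.Resource, op vm.ResourceOperation) bool {
	switch resource {
	case vm.SCSI:
		return uvm.capabilities.SCSI
	case vm.VPMEM:
		return uvm.capabilities.VPMem
	case vm.PCI:
		return uvm.capabilities.PCI
	default:
		return false
	}
}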
|
As discussed, here are some examples of how the code that would consume this would change, and we can go from here. @katiewasnothere @kevpar @ambarve @anmaxvl @msscotb
1. Have all of the interfaces embedded on UVM itself and check if the operation is supported inside the calls themselves.
// UVM interface definition
type UVM interface {
// ID will return a string identifier for the Utility VM.
ID() string
// State returns the current running state of the Utility VM. e.g. Created, Running, Terminated
State() State
// Create will create the Utility VM in a paused/powered off state with whatever is present in the implementation
// of the interface's config at the time of the call.
Create(ctx context.Context) error
// Start will power on the Utility VM and put it into a running state. This will boot the guest OS and start all of the
// devices configured on the machine.
Start(ctx context.Context) error
// Stop will shutdown the Utility VM and place it into a terminated state.
Stop(ctx context.Context) error
// Pause will place the Utility VM into a paused state. The guest OS will be halted and any devices will be in
// a suspended state. Save can be used to snapshot the current state of the virtual machine, and Resume can be used to
// place the virtual machine back into a running state.
Pause(ctx context.Context) error
// Resume will put a previously paused Utility VM back into a running state. The guest OS will resume operation from the point
// in time it was paused and all devices should be un-suspended.
Resume(ctx context.Context) error
// Save will snapshot the state of the Utility VM at the point in time when the VM was paused.
Save(ctx context.Context) error
// Wait synchronously waits for the Utility VM to shutdown or terminate. A call to stop will trigger this
// to unblock.
Wait() error
// Stats returns statistics about the Utility VM. This includes things like assigned memory, available memory,
// processor runtime etc.
Stats(ctx context.Context) (*stats.VirtualMachineStatistics, error)
// Supported returns if the virt stack supports a given operation on a resource.
Supported(resource Resource, operation ResourceOperation) bool
// ExitError will return any error if the Utility VM exited unexpectedly, or if the Utility VM experienced an
// error after Wait returned, it will return the wait error.
ExitError() error
WindowsConfigManager
LinuxConfigManager
MemoryManager
ProcessorManager
StorageQosManager
SerialManager
BootManager
SCSIManager
VPMemManager
NetworkManager
PCIManager
VMSocketManager
}
With this approach it's up to the implementation of UVM to check and return vm.ErrNotSupported in the calls themselves. So:
func (uvm *remoteVM) AddSCSIDisk(ctx context.Context, controller uint32, lun uint32, path string, typ vm.SCSIDiskType, readOnly bool) error {
if !uvm.capabilities.SCSI {
return vm.ErrNotSupported
}
var diskType vmservice.DiskType
switch typ {
case vm.SCSIDiskTypeVHD1:
diskType = vmservice.DiskType_SCSI_DISK_TYPE_VHD1
case vm.SCSIDiskTypeVHDX:
diskType = vmservice.DiskType_SCSI_DISK_TYPE_VHDX
case vm.SCSIDiskTypePassThrough:
diskType = vmservice.DiskType_SCSI_DISK_TYPE_PHYSICAL
default:
return fmt.Errorf("unsupported SCSI disk type: %d", typ)
}
disk := &vmservice.SCSIDisk{
Controller: controller,
Lun: lun,
HostPath: path,
Type: diskType,
ReadOnly: readOnly,
}
switch uvm.state {
case vm.StatePreCreated:
uvm.config.DevicesConfig.ScsiDisks = append(uvm.config.DevicesConfig.ScsiDisks, disk)
case vm.StateRunning:
if _, err := uvm.client.ModifyResource(ctx,
&vmservice.ModifyResourceRequest{
Type: vmservice.ModifyType_ADD,
Resource: &vmservice.ModifyResourceRequest_ScsiDisk{
ScsiDisk: disk,
},
},
); err != nil {
return errors.Wrap(err, "failed to add SCSI disk")
}
default:
return errors.New("VM is not in pre-created or running state")
}
return nil
}
This keeps the call sites in the uvm package pretty simple, and leaves it up to the caller to handle a returned/wrapped vm.ErrNotSupported. Not exactly the greatest abstraction (or really one at all) to have to fill out 20+ methods, but this is the most straightforward for the user of the interface, I feel.
From: internal/uvm/scsi.go, lines 290 to 325 in e481a13
to:
var guestReq guestrequest.GuestRequest
if sm.UVMPath != "" {
guestReq = guestrequest.GuestRequest{
ResourceType: guestrequest.ResourceTypeMappedVirtualDisk,
RequestType: requesttype.Add,
}
if uvm.operatingSystem == "windows" {
guestReq.Settings = guestrequest.WCOWMappedVirtualDisk{
ContainerPath: sm.UVMPath,
Lun: sm.LUN,
}
} else {
guestReq.Settings = guestrequest.LCOWMappedVirtualDisk{
MountPath: sm.UVMPath,
Lun: uint8(sm.LUN),
Controller: uint8(sm.Controller),
ReadOnly: readOnly,
}
}
}
var diskType vm.SCSIDiskType
switch attachmentType {
case "VirtualDisk":
switch ext := filepath.Ext(sm.HostPath); ext {
case ".vhd":
diskType = vm.SCSIDiskTypeVHD1
case ".vhdx":
diskType = vm.SCSIDiskTypeVHDX
default:
return nil, fmt.Errorf("unsupported extension for virtual disk: %s", ext)
}
case "PassThru":
diskType = vm.SCSIDiskTypePassThrough
default:
return nil, fmt.Errorf("unsupported SCSI disk type: %s", attachmentType)
}
if err := uvm.u.AddSCSIDisk(ctx, uint32(sm.Controller), uint32(sm.LUN), sm.HostPath, diskType, readOnly); err != nil {
return nil, errors.Wrap(err, "failed to add SCSI disk")
}
if err := uvm.GuestRequest(ctx, guestReq); err != nil {
return nil, errors.Wrap(err, "failed guest request to add SCSI disk: %s")
}
Besides the fact that we have to split off the guest request (or we could change uvm.modify to not be hcs'y also) and the disk type setup, the code is pretty similar.
2. Keep the UVM interface short and sweet and typecast. We'll also need to either do an explicit runtime check to see if the operation is supported (the same check we would do in the calls themselves in approach 1) or simply check in the calls themselves like approach 1.
// UVM is an abstraction around a lightweight virtual machine. It houses core lifecycle methods such as Create
// Start, and Stop and also several optional nested interfaces that can be used to determine what the virtual machine
// supports and to configure these resources.
type UVM interface {
// ID will return a string identifier for the Utility VM.
ID() string
// State returns the current running state of the Utility VM. e.g. Created, Running, Terminated
State() State
// Start will power on the Utility VM and put it into a running state. This will boot the guest OS and start all of the
// devices configured on the machine.
Start(ctx context.Context) error
// Stop will shutdown the Utility VM and place it into a terminated state.
Stop(ctx context.Context) error
// Pause will place the Utility VM into a paused state. The guest OS will be halted and any devices will be in
// a suspended state. Save can be used to snapshot the current state of the virtual machine, and Resume can be used to
// place the virtual machine back into a running state.
Pause(ctx context.Context) error
// Resume will put a previously paused Utility VM back into a running state. The guest OS will resume operation from the point
// in time it was paused and all devices should be un-suspended.
Resume(ctx context.Context) error
// Save will snapshot the state of the Utility VM at the point in time when the VM was paused.
Save(ctx context.Context) error
// Wait synchronously waits for the Utility VM to shutdown or terminate. A call to stop will trigger this
// to unblock.
Wait() error
// Stats returns statistics about the Utility VM. This includes things like assigned memory, available memory,
// processor runtime etc.
Stats(ctx context.Context) (*stats.VirtualMachineStatistics, error)
// Supported returns if the virt stack supports a given operation on a resource.
Supported(resource Resource, operation ResourceOperation) bool
// ExitError will return any error if the Utility VM exited unexpectedly, or if the Utility VM experienced an
// error after Wait returned, it will return the wait error.
ExitError() error
}
From:
// Build the HCS request document above..
//
if err := uvm.modify(ctx, SCSIModification); err != nil {
return nil, fmt.Errorf("failed to modify UVM with new SCSI mount: %s", err)
}
to:
scsi, ok := uvm.u.(SCSIManager)
if !ok || !uvm.u.Supported(vm.SCSI, vm.Add) {
// or some sort of fallback
return vm.ErrNotSupported
}
if err := scsi.AddSCSIDisk(ctx, uint32(sm.Controller), uint32(sm.LUN), sm.HostPath, diskType, readOnly); err != nil {
return nil, errors.Wrap(err, "failed to add SCSI disk")
}
if err := uvm.GuestRequest(ctx, guestReq); err != nil {
return nil, errors.Wrap(err, "failed guest request to add SCSI disk: %s")
}
This makes it so that we can leave off methods that we know won't be supported by a specific implementation, but it also means we have to check whether the UVM we have actually implements FooManager. We could also do the supported check in the call itself and return vm.ErrNotSupported there as well, like approach 1.
3. Keep the UVM interface semi-small but add methods to retrieve the Manager types directly, where the calls would return nil if the methods aren't defined. Really only a QOL improvement to avoid the typecasts everywhere.
// UVM is an abstraction around a lightweight virtual machine. It houses core lifecycle methods such as Create
// Start, and Stop and also several optional nested interfaces that can be used to determine what the virtual machine
// supports and to configure these resources.
type UVM interface {
// ID will return a string identifier for the Utility VM.
ID() string
// State returns the current running state of the Utility VM. e.g. Created, Running, Terminated
State() State
// Start will power on the Utility VM and put it into a running state. This will boot the guest OS and start all of the
// devices configured on the machine.
Start(ctx context.Context) error
// Stop will shutdown the Utility VM and place it into a terminated state.
Stop(ctx context.Context) error
// Pause will place the Utility VM into a paused state. The guest OS will be halted and any devices will be in
// a suspended state. Save can be used to snapshot the current state of the virtual machine, and Resume can be used to
// place the virtual machine back into a running state.
Pause(ctx context.Context) error
// Resume will put a previously paused Utility VM back into a running state. The guest OS will resume operation from the point
// in time it was paused and all devices should be un-suspended.
Resume(ctx context.Context) error
// Save will snapshot the state of the Utility VM at the point in time when the VM was paused.
Save(ctx context.Context) error
// Wait synchronously waits for the Utility VM to shutdown or terminate. A call to stop will trigger this
// to unblock.
Wait() error
// Stats returns statistics about the Utility VM. This includes things like assigned memory, available memory,
// processor runtime etc.
Stats(ctx context.Context) (*stats.VirtualMachineStatistics, error)
// Supported returns if the virt stack supports a given operation on a resource.
Supported(resource Resource, operation ResourceOperation) bool
// ExitError will return any error if the Utility VM exited unexpectedly, or if the Utility VM experienced an
// error after Wait returned, it will return the wait error.
ExitError() error
SCSI() SCSIManager
VPMemManager() VPMemManager
// Rest of the managers..
}
so instead of typecasting you'd call uvm.u.SCSI(), check if it's nil, and then the rest of approach 2 is the same.
There was one other thing that John had suggested that was different from what I had originally. In the original approach I'd gone with carrying around a state variable in the UVM implementation that would be used to determine what to do with an operation. For instance, if the VM hadn't been created yet when calling AddFoo(), the foo would be added to the VM's initial creation document. If the VM was already running then it would be added at runtime.
For example:
func (uvm *utilityVM) AddSCSIDisk(ctx context.Context, controller uint32, lun uint32, path string, typ vm.SCSIDiskType, readOnly bool) error {
switch uvm.state {
case vm.StatePreCreated:
return uvm.addSCSIDiskPreCreated(ctx, controller, lun, path, typ, readOnly)
case vm.StateRunning:
return uvm.addSCSIDiskCreatedRunning(ctx, controller, lun, path, typ, readOnly)
default:
return fmt.Errorf("VM is not in valid state for this operation: %d", uvm.state)
}
}
It might make more sense to split off the creation of the VM/building of the document into its own builder interface and handle these cases separately, instead of relying on the state to determine the branching logic. The builder interface would re-use the same Manager interfaces (so the same typecasting vs. embedded-interface trade-offs from approaches 1 through 3 apply here too) and just implement the "VM hasn't been created yet" logic. The only method on the interface itself would be a Create method that would return a UVM type with whatever was built up from the Manager calls beforehand.
type UVMBuilder interface {
// Create will create the Utility VM in a paused/powered off state with whatever is present in the implementation
// of the interface's config at the time of the call.
Create(ctx context.Context) (UVM, error)
}
Example of an implementation of AddVPMemDevice for both UVM and UVMBuilder:
func (uvmb *utilityVMBuilder) AddVPMemDevice(ctx context.Context, id uint32, path string, readOnly bool, imageFormat vm.VPMemImageFormat) error {
if uvmb.doc.VirtualMachine.Devices.VirtualPMem == nil {
return errors.New("VPMem controller has not been added")
}
imageFormatString, err := getVPMemImageFormatString(imageFormat)
if err != nil {
return err
}
uvmb.doc.VirtualMachine.Devices.VirtualPMem.Devices[strconv.Itoa(int(id))] = hcsschema.VirtualPMemDevice{
HostPath: path,
ReadOnly: readOnly,
ImageFormat: imageFormatString,
}
return nil
}
func (uvm *utilityVM) AddVPMemDevice(ctx context.Context, id uint32, path string, readOnly bool, imageFormat vm.VPMemImageFormat) error {
imageFormatString, err := getVPMemImageFormatString(imageFormat)
if err != nil {
return err
}
request := &hcsschema.ModifySettingRequest{
RequestType: requesttype.Add,
Settings: hcsschema.VirtualPMemDevice{
HostPath: path,
ReadOnly: readOnly,
ImageFormat: imageFormatString,
},
ResourcePath: fmt.Sprintf(resourcepaths.VPMemControllerResourceFormat, id),
}
return uvm.cs.Modify(ctx, request)
}
This seems like a nice approach, but I didn't mind the state approach really.
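To make the flow concrete, a hedged sketch of how a caller might use the builder (NewUVMBuilder and the variable names are hypothetical; AddSCSIDisk follows the SCSIManager signature above):
// Hypothetical construction flow: configure everything on the builder, then
// Create hands back a UVM for runtime operations.
builder, err := remotevm.NewUVMBuilder(ctx, vmID, owner)
if err != nil {
	return err
}
if err := builder.AddSCSIDisk(ctx, 0, 0, bootVHDPath, vm.SCSIDiskTypeVHDX, false); err != nil {
	return err
}
uvm, err := builder.Create(ctx)
if err != nil {
	return err
}
if err := uvm.Start(ctx); err != nil {
	return err
}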
|
1. Type assertions seem like the correct approach for this kind of problem. Every implementation of a virt stack should not have to implement the superset of all interfaces.
2. The builder model looks safer because it is explicit and clearly separates static from dynamic state.
3. The question of having the right abstraction (AddLayer) is important but somewhat orthogonal to this PR.
|
Add vm package, uvm and uvmbuilder interfaces to abstract away the operations that we call directly into hcs for. This will be useful for having these operations be performed by a different virtstack so long as it supports what is needed for containers. Signed-off-by: Daniel Canter <dcanter@microsoft.com>
msscotb
left a comment
lgtm
anmaxvl
left a comment
lgtm
|
containerd likes to use the Opts pattern, which to me is similar to the builder pattern, but without an actual builder interface. This way I guess you wouldn't need the builder interface at all; the opts would just be applied to the doc that's passed to HCS. |
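A rough sketch of how that Opts shape could look against the creation doc (UVMOpt, WithVPMemDevice, and Create here are hypothetical and only meant to show the shape, not code from this PR):
// UVMOpt mutates the creation document before the compute system is created.
type UVMOpt func(ctx context.Context, doc *hcsschema.ComputeSystem) error
// WithVPMemDevice adds a VPMem device to the doc, mirroring the builder example above.
func WithVPMemDevice(id uint32, path string, readOnly bool) UVMOpt {
	return func(ctx context.Context, doc *hcsschema.ComputeSystem) error {
		if doc.VirtualMachine.Devices.VirtualPMem == nil {
			return errors.New("VPMem controller has not been added")
		}
		doc.VirtualMachine.Devices.VirtualPMem.Devices[strconv.Itoa(int(id))] = hcsschema.VirtualPMemDevice{
			HostPath: path,
			ReadOnly: readOnly,
		}
		return nil
	}
}
// Create applies each opt to the doc before handing it to HCS, which would remove
// the need for a separate builder interface.
func Create(ctx context.Context, doc *hcsschema.ComputeSystem, opts ...UVMOpt) error {
	for _, o := range opts {
		if err := o(ctx, doc); err != nil {
			return err
		}
	}
	// ...create the compute system from doc...
	return nil
}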
|
@anmaxvl I was just thinking about something similar to this, although as we're re-using the manager interfaces it becomes a pain. It's a bit weird how it's up to the impl of UVM to carry around some sort of internal doc on the underlying struct with this approach right now. I might play around with this in a coming PR but I like the feedback and idea |
Related work items: microsoft#388, microsoft#389, microsoft#393, microsoft#394, microsoft#395, microsoft#396, microsoft#397, microsoft#398, microsoft#399, microsoft#400, microsoft#401, microsoft#402, microsoft#403, microsoft#404, microsoft#405, microsoft#931, microsoft#973, microsoft#1001, microsoft#1003, microsoft#1004, microsoft#1005, microsoft#1006, microsoft#1007, microsoft#1009, microsoft#1010, microsoft#1012, microsoft#1013, microsoft#1014, microsoft#1015, microsoft#1016, microsoft#1017, microsoft#1019, microsoft#1021, microsoft#1022, microsoft#1024, microsoft#1025, microsoft#1027, microsoft#1028, microsoft#1029, microsoft#1030, microsoft#1031, microsoft#1033
Add abstractions for direct HCS interactions
Add vm package and uvm interface to abstract away the operations that we call directly into
hcs for. This will be useful for having these operations be performed by a different virtstack
so long as it supports what is needed for containers.
Signed-off-by: Daniel Canter dcanter@microsoft.com