
Changing VM hardware (memory, cores) has no effect after provisioning #4979

Closed
blueelvis opened this issue Aug 4, 2019 · 7 comments
Labels
area/guest-vm: General configuration issues with the minikube guest VM
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/feature: Categorizes issue or PR as related to a new feature.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

@blueelvis
Contributor

blueelvis commented Aug 4, 2019

I'm fairly sure this would be the case with other drivers as well.
The exact command to reproduce the issue:

.\minikube-windows-amd64.exe start --vm-driver=hyperv --hyperv-virtual-switch="Default Switch" --memory=1700
.\minikube-windows-amd64.exe start --vm-driver=hyperv --hyperv-virtual-switch="Default Switch" --memory=1500 --cpus=1

The cluster is initially provisioned with the requested memory and CPU, but if I start it again with new parameters, the VM just comes up with the original values, as shown in the screenshot below:
[screenshot: minikube start output showing the VM still using the original memory/CPU settings]

I think these values should be reconfigurable. The only thing that should not be resizable after provisioning is the hard disk.

The operating system version:
Windows 10 Enterprise.

I'm fairly sure this applies across all drivers and all operating systems.

@tstromberg
Contributor

I'm not sure whether the machine driver infrastructure allows changing hardware on the fly. I'm also not sure whether there is a way we can detect that the current VM has the correct hardware settings.

What we can do, however, is compare the requested VM config with the previously saved one and recreate the VM, preserving the existing disk if at all possible. If preserving the disk isn't possible, we should prompt the user or require them to pass a flag allowing deletion.
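For illustration, here is a rough Go sketch of that comparison step. This is not minikube's actual code; the hardwareConfig type and its field names are made up, and it only shows the decision of when a recreate (and a disk-destroying recreate) would be needed.

```go
// Hypothetical sketch: compare the hardware saved from the previous
// "minikube start" with what the user is now requesting, and decide
// whether the VM must be recreated and whether the disk survives.
package main

import "fmt"

type hardwareConfig struct {
	MemoryMB int
	CPUs     int
	DiskMB   int
}

// needsRecreate reports whether the saved VM no longer matches the request,
// and separately whether the mismatch involves the disk size.
func needsRecreate(saved, requested hardwareConfig) (recreate, losesDisk bool) {
	recreate = saved.MemoryMB != requested.MemoryMB || saved.CPUs != requested.CPUs
	losesDisk = saved.DiskMB != requested.DiskMB
	return recreate || losesDisk, losesDisk
}

func main() {
	saved := hardwareConfig{MemoryMB: 1700, CPUs: 2, DiskMB: 20000}
	requested := hardwareConfig{MemoryMB: 1500, CPUs: 1, DiskMB: 20000}

	recreate, losesDisk := needsRecreate(saved, requested)
	switch {
	case recreate && losesDisk:
		fmt.Println("disk size changed: prompt the user or require an explicit delete flag")
	case recreate:
		fmt.Println("hardware changed: recreate the VM and reattach the existing disk")
	default:
		fmt.Println("saved VM already matches the requested hardware")
	}
}
```

Returning losesDisk separately is the point: a memory/CPU-only change could reuse the existing disk, while a disk-size change is the case that warrants a prompt or an explicit flag.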

@tstromberg tstromberg added the priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. label Aug 6, 2019
@tstromberg tstromberg changed the title If the minikube VM is already provisioned, changing CPU and Memory has no effect Changing VM hardware spec (memory, cores) has no effect after provisioning Aug 6, 2019
@tstromberg tstromberg changed the title Changing VM hardware spec (memory, cores) has no effect after provisioning Changing VM hardware (memory, cores) has no effect after provisioning Aug 6, 2019
@afbjorklund
Collaborator

Normally you would change this in the machine JSON file, and then restart the machine...

I think minikube could do the same thing with the config, after the Load but before the Save?
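A minimal sketch of the "edit the machine JSON, then restart" idea, assuming a docker-machine style config.json under ~/.minikube/machines/minikube/. The path and the Driver.MemSize / Driver.CPU keys are assumptions and vary by driver; this is not a documented minikube interface.

```go
// Hypothetical sketch: patch the machine's on-disk config so the next
// restart picks up new memory/CPU values. Field names are assumptions.
package main

import (
	"encoding/json"
	"log"
	"os"
	"path/filepath"
)

func main() {
	home, err := os.UserHomeDir()
	if err != nil {
		log.Fatal(err)
	}
	// Assumed location of the machine config for the default profile.
	cfgPath := filepath.Join(home, ".minikube", "machines", "minikube", "config.json")

	raw, err := os.ReadFile(cfgPath)
	if err != nil {
		log.Fatal(err)
	}

	// Decode into a generic map so unknown fields survive the round trip.
	var cfg map[string]interface{}
	if err := json.Unmarshal(raw, &cfg); err != nil {
		log.Fatal(err)
	}

	if driver, ok := cfg["Driver"].(map[string]interface{}); ok {
		driver["MemSize"] = 1500 // assumed field names; they differ per driver
		driver["CPU"] = 1
	}

	out, err := json.MarshalIndent(cfg, "", "    ")
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(cfgPath, out, 0600); err != nil {
		log.Fatal(err)
	}
	log.Println("patched", cfgPath, "- restart the machine to apply")
}
```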

@afbjorklund afbjorklund added the area/guest-vm General configuration issues with the minikube guest VM label Aug 6, 2019
@afbjorklund
Collaborator

This probably depends a lot on the driver... Some drivers create an external configuration, others pass parameters at start time.

If a driver does create an external object, then it needs the synchronization step mentioned above.
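To make that split concrete, a hypothetical Go sketch: drivers that only pass parameters at start time need nothing extra, while drivers that keep an external VM object get an explicit synchronization step before starting. The interfaces and driver types are invented for illustration; this is not libmachine's API.

```go
package main

import "fmt"

type hardware struct {
	MemoryMB int
	CPUs     int
}

// driver is the minimal behaviour every driver shares in this sketch.
type driver interface {
	Start(hw hardware) error
}

// externalConfigDriver marks drivers that keep a VM definition outside of
// minikube (a hypervisor-side VM object) and must sync it before starting.
type externalConfigDriver interface {
	driver
	SyncHardware(hw hardware) error
}

// start runs the extra synchronization step only for drivers that need it.
func start(d driver, hw hardware) error {
	if ext, ok := d.(externalConfigDriver); ok {
		if err := ext.SyncHardware(hw); err != nil {
			return err
		}
	}
	return d.Start(hw)
}

// paramDriver passes hardware settings as start parameters every time.
type paramDriver struct{}

func (paramDriver) Start(hw hardware) error {
	fmt.Println("starting with", hw.MemoryMB, "MB and", hw.CPUs, "CPUs")
	return nil
}

// vmObjectDriver owns an external VM object that must be updated first.
type vmObjectDriver struct{}

func (vmObjectDriver) SyncHardware(hw hardware) error {
	fmt.Println("updating hypervisor VM object to", hw.MemoryMB, "MB /", hw.CPUs, "CPUs")
	return nil
}

func (vmObjectDriver) Start(hw hardware) error {
	fmt.Println("starting existing VM")
	return nil
}

func main() {
	hw := hardware{MemoryMB: 1500, CPUs: 1}
	_ = start(paramDriver{}, hw)
	_ = start(vmObjectDriver{}, hw)
}
```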

@sharifelgamal sharifelgamal added the kind/feature Categorizes issue or PR as related to a new feature. label Sep 20, 2019
@sharifelgamal sharifelgamal added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Dec 16, 2019
@sharifelgamal
Collaborator

This is probably still worth looking at.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 16, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 15, 2020
@priyawadhwa

Hey @blueelvis --

Hopefully it's OK if I close this - there wasn't enough information to make it actionable, and some time has already passed. If you are able to provide additional details, you may reopen it at any point by adding /reopen to your comment.

Here is additional information that may be helpful to us:

  • Whether the issue occurs with the latest minikube release
  • The exact minikube start command line used
  • The full output of the minikube start command, preferably with --alsologtostderr -v=3 for extra logging.
  • The full output of minikube logs

Thank you for sharing your experience!
