minikube creating >1000 files in /tmp (1-3 per run) #2918

Closed
paulcarlton opened this issue Jun 20, 2018 · 19 comments
Labels
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/cleanup: Categorizes issue or PR as related to cleaning up code, process, or technical debt.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
priority/awaiting-more-evidence: Lowest priority. Possibly useful, but not yet enough support to actually get it done.
r/2019q2: Issue was last reviewed 2019q2.

Comments


paulcarlton commented Jun 20, 2018

I'm seeing lots of files being created in /tmp when I run minikube

pcarlton@pcarlton1:~/src/github.hpe.com/paul-carlton2/dev-stuff$ cat /tmp/minikube.pcarlton1.pcarlton.log.WARNING.20180620-093000.14780 
Log file created at: 2018/06/20 09:30:00
Running on machine: pcarlton1
Binary: Built with gc go1.9.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
pcarlton@pcarlton1:~/src/github.hpe.com/paul-carlton2/dev-stuff$ ls -l /tmp | grep minikube.pcarlton1.pcarlton.log | wc -l
1452
pcarlton@pcarlton1:~/src/github.hpe.com/paul-carlton2/dev-stuff$ cat /tmp/minikube.pcarlton1.pcarlton.log.INFO.20180620-093015.15416
Log file created at: 2018/06/20 09:30:15
Running on machine: pcarlton1
Binary: Built with gc go1.9.1 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0620 09:30:15.721658   15416 notify.go:109] Checking for updates...
pcarlton@pcarlton1:~/src/github.hpe.com/paul-carlton2/dev-stuff$ ls -l /tmp | grep minikube.pcarlton1.pcarlton.log | wc -l
1480

minikube version
minikube version: v0.28.0
pcarlton@pcarlton1:~/src/github.hpe.com/paul-carlton2/dev-stuff$ echo "";

pcarlton@pcarlton1:~/src/github.hpe.com/paul-carlton2/dev-stuff$ echo "OS:";
OS:
pcarlton@pcarlton1:~/src/github.hpe.com/paul-carlton2/dev-stuff$ cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04.4 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.4 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
pcarlton@pcarlton1:~/src/github.hpe.com/paul-carlton2/dev-stuff$ echo "";

pcarlton@pcarlton1:~/src/github.hpe.com/paul-carlton2/dev-stuff$ echo "VM driver": 
VM driver:
pcarlton@pcarlton1:~/src/github.hpe.com/paul-carlton2/dev-stuff$ grep DriverName ~/.minikube/machines/minikube/config.json
    "DriverName": "kvm2",
pcarlton@pcarlton1:~/src/github.hpe.com/paul-carlton2/dev-stuff$ echo "";

pcarlton@pcarlton1:~/src/github.hpe.com/paul-carlton2/dev-stuff$ echo "ISO version";
ISO version
pcarlton@pcarlton1:~/src/github.hpe.com/paul-carlton2/dev-stuff$ grep -i ISO ~/.minikube/machines/minikube/config.json
        "Boot2DockerURL": "file:///home/pcarlton/.minikube/cache/iso/minikube-v0.28.0.iso",
        "ISO": "/home/pcarlton/.minikube/machines/minikube/boot2docker.iso",

How can I fix this?
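One possible mitigation, assuming the minikube binary exposes the glog logging flags it inherits (verify with minikube --help that --log_dir is available in your version), is to point the log files at a dedicated directory or prune them periodically; a sketch:

# write glog files somewhere other than /tmp (flag inherited from glog; directory name is just an example)
minikube start --vm-driver=kvm2 --log_dir=$HOME/.minikube/logs

# or periodically delete minikube log files older than 7 days
find /tmp -maxdepth 1 -name 'minikube.*.log.*' -mtime +7 -delete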


Sicaine commented Jun 27, 2018

Is it a problem for you when a tool creates files in /tmp?

@paulcarlton
Author

A few files are OK, but we are talking about hundreds of files:
ls -l /tmp | grep minikube.pcarlton1.pcarlton.log | wc -l
1452

tstromberg changed the title from "minikube creating lots of files in /tmp" to "minikube creating >1000 files in /tmp" on Sep 19, 2018
tstromberg added the os/linux, co/kvm2-driver and kind/bug labels and removed the os/linux label on Sep 19, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Dec 18, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jan 17, 2019
Contributor

tstromberg commented Jan 24, 2019

Generally, minikube only creates a single .INFO file per execution, with the format of:

minikube.<hostname>.<username>.log.INFO.<yyyymmdd>-<hhmmss>.<pid>

minikube will also symlink the latest .INFO file to /tmp/minikube.INFO. These two behaviors are inherited from https://github.com/golang/glog.

If minikube needs to log lines at ERROR or WARNING level, these will be extracted out into .ERROR and .WARNING files appropriately. At maximum, you should be seeing 3 files per minikube execution, plus a set of symlinks to the latest run.

We could presumably change these behaviors by switching to a different logging library, but I'm not yet convinced that it is of great benefit to change the behavior. Will leave this bug open though just in case.

FWIW, on my workstation where I execute minikube dozens of times a day, I had 142 files in /tmp. My OS also automatically cleans up stale /tmp files, however.
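A quick way to verify this per-run behavior, assuming a Linux host where glog writes to its default temp directory (/tmp); a sketch:

# run minikube once, then list what was created in the last minute
minikube status
find /tmp -maxdepth 1 -name 'minikube.*' -mmin -1 -ls

# the short names are symlinks to the newest run's files
ls -l /tmp/minikube.INFO /tmp/minikube.WARNING /tmp/minikube.ERROR 2>/dev/null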

tstromberg added the priority/awaiting-more-evidence and kind/cleanup labels and removed the co/kvm2-driver, kind/bug, lifecycle/rotten and os/linux labels on Jan 24, 2019
tstromberg changed the title from "minikube creating >1000 files in /tmp" to "minikube creating >1000 files in /tmp (1-3 per run)" on Jan 24, 2019
@springstim

@paulcarlton,

I noticed that minikube on my system was creating a set of three files (INFO, ERROR and WARNING) in /tmp/ pretty much every minute, sometimes several sets per minute. Eventually some kube operations would take seemingly forever to return and I started having other system issues (in my case it was a VM running minikube)... and I found loads of these files (hundreds of thousands in my case) in /tmp.

I think the problem is that the minikube executable does its business launching the cluster and then exits, but in my systemd configuration I had set the service to Restart=always with RestartSec=10. So I assume what was happening was that minikube was being restarted 10 seconds after it exited... and creating a new set of log files for every re-launch.

I don't recall at this point whether I originally created the systemd unit (CentOS 7) in /usr/lib/systemd/system/minikube.service by hand or whether it came from some package installation. Probably I crafted it manually and had erroneously set Restart=always. Surely this was causing all sorts of unnecessary chaos for me. You might want to double-check that minikube isn't being continually restarted, for example by systemd as I described above.

Presently I have the systemd service set (with vm-driver=none for Docker-only) as follows with minikube installed in /usr/local/bin:

[Service]
Type=simple
ExecStart=/usr/local/bin/minikube start --vm-driver=none
Restart=no
StartLimitInterval=0
RestartSec=30
ExecStop=/usr/local/bin/minikube stop

Additional suggestions for better tuning of the systemd startup file for minikube are welcome. I didn't see a suggested systemd config with the minikube installation notes, but perhaps I overlooked it.
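To check whether a unit like this is repeatedly relaunching minikube (and therefore creating a fresh set of glog files on every start), something along these lines should work, assuming the unit is named minikube.service:

systemctl status minikube.service
journalctl -u minikube.service --since "1 hour ago" | grep -c 'Started'

# compare with the rate at which new log files appear in /tmp
ls -lt /tmp | grep 'minikube\..*\.log\.' | head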

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on May 2, 2019
tstromberg added the help wanted and r/2019q2 labels and removed the lifecycle/stale label on May 22, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Aug 20, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Sep 19, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.


belimawr commented Nov 7, 2019

I'm having a similar issue, with the difference that I'm not running minikube (or at least I thought I wasn't); I only have it installed.

After reading all the comments here something crossed my mind: I'm running zsh and oh-my-zsh with the minikube plugin. What happens is that every time I open a new terminal the plugin runs minikube completion zsh and new log files are created. As I use a tiling window manager (i3), I am opening and closing terminals all the time, hence generating dozens of log files after a few days without cleaning my /tmp.
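A quick way to confirm that the completion call alone produces log files (a sketch, assuming glog's default /tmp location):

before=$(ls /tmp | grep -c 'minikube\..*\.log\.')
minikube completion zsh > /dev/null
after=$(ls /tmp | grep -c 'minikube\..*\.log\.')
echo "log files before: $before, after: $after"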

Contributor

tstromberg commented Nov 7, 2019 via email

@k8s-ci-robot
Contributor

@tstromberg: Reopened this issue.

In response to this:

/reopen

On Thu, Nov 7, 2019, 2:17 PM Tiago Queiroz notifications@github.com wrote:

I'm having a similar issue with the difference that I'm not running minikube (or at least I thought I wasn't), I only have it installed.

After reading all comments here something crossed my mind: I'm running zsh and oh-my-zsh with the minikube plugin. What happens is that every time I open a new terminal the plugin runs minikube completion zsh and new log files are created. As I use a tilling window manager (i3) I am opening and closing terminals all the time, hence generating dozens of log files after a few days without cleaning my /tmp.



Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot reopened this on Nov 7, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@plantusd

I don't even have minikube installed on my system, but it is indeed creating hundreds of files per minute. Has anyone come across this issue?
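When minikube was never installed deliberately, a bundled copy (for example the one the Cloud Code extension ships, mentioned below) may be the one running; a sketch for tracking down which binary is responsible:

# see how quickly new log files appear and which copies of minikube exist on disk
ls -lt /tmp | grep 'minikube\..*\.log\.' | head -3
find $HOME -maxdepth 6 -type f -name minikube 2>/dev/null

# check whether anything is currently running (or repeatedly launching) minikube
pgrep -af minikube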


loopingz commented Oct 12, 2020

I have a similar issue and I think it is related to the VS Code Cloud Code extension's bundled binary at .cache/cloud-code/installer/google-cloud-sdk/bin/minikube

GoogleCloudPlatform/cloud-code-vscode#286


jwuman commented Oct 23, 2020

Kill VS Code and there are no more /tmp/minikube* files (previously in the thousands).
