cache: Only use docker client for docker runtime #8576
Conversation
Previously we were _always_ asking the docker daemon, including for other runtimes such as podman.
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: afbjorklund. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
I wonder if this fixes the crio cache issue #8554
/ok-to-test
Approved, with a question that needs an answer:
if cr.Name() == "Docker" {
	imgClient, err = client.NewClientWithOpts(client.FromEnv) // image client
	if err != nil {
		glog.Infof("couldn't get a local image daemon which might be ok: %v", err)
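A minimal sketch of that guard in context, assuming a hypothetical helper newImageClient; the function name and surrounding wiring are illustrative, not minikube's actual code:

```go
package cache

import (
	"github.com/docker/docker/client"
	"github.com/golang/glog"
)

// newImageClient returns a docker API client only when the container
// runtime is Docker. For other runtimes (podman, crio) it returns nil,
// so callers fall back to the tarball-based lookup instead of asking
// a daemon that may not even be running.
func newImageClient(runtimeName string) *client.Client {
	if runtimeName != "Docker" {
		return nil
	}
	imgClient, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		// Same behaviour as before: a missing daemon is not fatal.
		glog.Infof("couldn't get a local image daemon which might be ok: %v", err)
		return nil
	}
	return imgClient
}
```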
Do you still want to set it as nil?
imgClient = nil
Or is that one not needed anymore?
It is still set to nil, if it cannot connect to the docker daemon (for whatever reason)...
The main change is that now I also set it to nil (don't ask docker) when using podman.
As far as I know, this docker client is only used to query for the image ID.
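For reference, an image-ID lookup against the daemon can be done with a single inspect call; a sketch under that assumption, not the actual call site:

```go
package cache

import (
	"context"

	"github.com/docker/docker/client"
)

// imageID asks the docker daemon for the ID of a locally stored image,
// e.g. "sha256:3439b7546f29...". Callers must handle cli == nil for
// non-docker runtimes.
func imageID(ctx context.Context, cli *client.Client, ref string) (string, error) {
	inspect, _, err := cli.ImageInspectWithRaw(ctx, ref)
	if err != nil {
		return "", err
	}
	return inspect.ID, nil
}
```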
For the other runtimes, we use the stupid way of doing a save and reading the manifest back again.
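That save-and-read-back path, sketched with go-containerregistry's tarball package (the path argument and helper name are placeholders):

```go
package cache

import (
	"github.com/google/go-containerregistry/pkg/v1/tarball"
)

// tarballImageID loads a saved image tarball and returns its image ID
// (the config digest) without talking to any daemon.
func tarballImageID(path string) (string, error) {
	img, err := tarball.ImageFromPath(path, nil) // nil tag: single-image tarball
	if err != nil {
		return "", err
	}
	digest, err := img.ConfigName()
	if err != nil {
		return "", err
	}
	return digest.String(), nil
}
```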
I'm not sure that asking the docker daemon makes sense for anything but driver "none" anyway?
Unfortunately not, that one is about "localhost/"
But there's another similar place, so maybe I should create an issue to collect both PRs. It's the
This one fails rather quickly, at least when we have stopped the docker daemon first.
Otherwise it waits for the API negotiation, and then the
I see a crio test failure in docker
Test failure seems unrelated to image lookup:
And this whole function still looks odd; even when running with Docker, it connects to the host daemon:

imgClient.NewClientWithOpts
imgClient.NegotiateAPIVersion
retrieving image: k8s.gcr.io/kube-proxy:v1.18.3

So we will round-trip to the registry (with remote.Image) in the end, just to get the ID for an image reference. Then we compare that with what is on the machine:
Just in case the tag was changed, like with
But we could probably read this image ID from the cache file instead?
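The round trip being described looks roughly like this; a sketch using go-containerregistry's remote package, with the image reference taken from the log above:

```go
package main

import (
	"fmt"

	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

func main() {
	// Parse the tag, then fetch the manifest list and the manifest
	// over HTTPS, just to learn the image ID (the config digest).
	ref, err := name.ParseReference("k8s.gcr.io/kube-proxy:v1.18.3")
	if err != nil {
		panic(err)
	}
	img, err := remote.Image(ref)
	if err != nil {
		panic(err)
	}
	id, err := img.ConfigName()
	if err != nil {
		panic(err)
	}
	fmt.Println(id) // sha256:3439b7546f29...
}
```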
The other error seems to be a new bug introduced in google/go-containerregistry@4336215. It now negotiates the API version after saving the image, which of course doesn't work out so great:
When switching the order, it returns the "proper" error (which is of course just as silly as it was above).
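In docker-client terms, the fix is just ordering: NegotiateAPIVersion has to run before the call whose error you care about. A sketch, not go-containerregistry's actual code:

```go
package main

import (
	"context"
	"io"

	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		panic(err)
	}
	// Negotiate first; doing this after ImageSave is what produced the
	// confusing error above.
	cli.NegotiateAPIVersion(ctx)

	rc, err := cli.ImageSave(ctx, []string{"k8s.gcr.io/kube-proxy:v1.18.3"})
	if err != nil {
		panic(err) // with the right order, this is the "proper" error
	}
	defer rc.Close()
	io.Copy(io.Discard, rc)
}
```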
So when we are done playing games with the docker daemon, we go back to HTTP again:
https://k8s.gcr.io/v2/kube-proxy/manifests/v1.18.3

{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
"manifests": [
{
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"size": 1786,
"digest": "sha256:83d42f2ac5e87529792c3bb1a0e606d5d04cfca03cc43d2ab6003e4132d47adb",
"platform": {
"architecture": "amd64",
"os": "linux"
}
},
{
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"size": 1786,
"digest": "sha256:b1a857a32ba0cb1c8eb878c23b114cec34e920a7c8425ad51788aba053e86784",
"platform": {
"architecture": "arm",
"os": "linux"
}
},
{
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"size": 1786,
"digest": "sha256:da09914212a84d40ae87eda93a6fb1f238802e771e21744e6d774cca981755bb",
"platform": {
"architecture": "arm64",
"os": "linux"
}
},
{
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"size": 1786,
"digest": "sha256:94fc59f882483c904218284f0436356a419ff76697eafe589bb7c9a3001be5ec",
"platform": {
"architecture": "ppc64le",
"os": "linux"
}
},
{
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"size": 1786,
"digest": "sha256:12d38d61902dae7e276857e852c5c3966ea77641c90350fd18e778a0fcc92624",
"platform": {
"architecture": "s390x",
"os": "linux"
}
}
]
}

{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"config": {
"mediaType": "application/vnd.docker.container.image.v1+json",
"size": 3362,
"digest": "sha256:3439b7546f29bec22edd737bc0a5770ead18b5ee5ce0aea5af9047a554715f9f"
},
"layers": [
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 21082831,
"digest": "sha256:83b4483280e5187b2801b449338d5755e5874ab80c44bf1ce615d258142e7c8b"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 14624820,
"digest": "sha256:cedd2715c2e49ea291abb2e4b484f8987a863165c42771a34ca9e0f338737bfd"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 2051786,
"digest": "sha256:297e97c9c47212aa4f0fbc63e85b091e7a76ef85e082f9ce5574aca82144a60f"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 1027,
"digest": "sha256:67b649411e755cc189f30bf880c949d1b29b9a16f9588efa2039e6121f437f14"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 509,
"digest": "sha256:d97928a1765f72d088e9aaa86bf42e528abbcc105f97910a2ca949a28286807f"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 545,
"digest": "sha256:ffa39a529ef38619a2018eaae009e3309f0f1e1d5549c38fc3bb976fa4acd57e"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 11496295,
"digest": "sha256:16d39f79015574318902e559855296d6185986be2bf14ddd39d465027359f22d"
}
]
}

Every. Single. Time. (thus the "4 seconds")
Instead we should just load the manifest.json from

[
{
"Config": "sha256:3439b7546f29bec22edd737bc0a5770ead18b5ee5ce0aea5af9047a554715f9f",
"RepoTags": [
"k8s.gcr.io/kube-proxy:v1.18.3"
],
"Layers": [
"f2b7e9105f23b8058456935d2519c1ec49d8b0a23e507440cecb4e9dc7a52e84.tar.gz",
"c42eb7c5302e36befb4ec23825c051453797b6d5b68c3ca9ae4b499ad01ce156.tar.gz",
"887ac2e14664738156a467ae51e29312cdfee2c081315b457d9fe7540ba98255.tar.gz",
"9131e15386636e08255369d26df689645169ee2e691df866a103df6aba576529.tar.gz",
"33ca93f1f4402fda10dbd99b37c6b3a07b8b34576e38bf7fe203591c97b7bb36.tar.gz",
"eb382d341c00861f68d1b7beb04a7603629583bb009e12cb8ec0e8047bc9c66b.tar.gz",
"d9bb8a51acbe01fa40d2094d4bb316627729ccce6793beecb047bc6f4193d84c.tar.gz"
]
}
]

660 bytes, from the local cache file.
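Reading that Config digest (the image ID) straight out of the cached tarball's manifest.json needs no network at all. A sketch; the cachedImageID helper and the docker-save tarball layout are assumptions:

```go
package cache

import (
	"archive/tar"
	"encoding/json"
	"fmt"
	"io"
	"os"
)

// cachedImageID scans a docker-save style tarball for manifest.json and
// returns the Config digest, i.e. the image ID, with no network access.
func cachedImageID(cachePath string) (string, error) {
	f, err := os.Open(cachePath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	tr := tar.NewReader(f)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return "", err
		}
		if hdr.Name != "manifest.json" {
			continue
		}
		var manifest []struct {
			Config   string
			RepoTags []string
			Layers   []string
		}
		if err := json.NewDecoder(tr).Decode(&manifest); err != nil {
			return "", err
		}
		if len(manifest) == 0 {
			return "", fmt.Errorf("empty manifest in %s", cachePath)
		}
		return manifest[0].Config, nil
	}
	return "", fmt.Errorf("no manifest.json in %s", cachePath)
}
```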
/retest-this-please