ErrImageNeverPull with trivy.command = filesystem or rootfs #1978
Comments
@chary1112004 thanks for reporting this issue. I have never experienced it; I'll have to investigate and update you.
@chary1112004 I tried to investigate this, but no luck; I'm unable to reproduce it.
@chen-keinan have you tried to reproduce it when deploying Trivy to EKS?
@chary1112004 nope, but I do not think it's related to a cloud provider setting; it looks like a cluster configuration issue in some way.
@chen-keinan sorry, I just meant Kubernetes.
I also get this on EKS when using Bottlerocket nodes (no idea if normal AL23 nodes also have it).
Also happens in a disconnected OpenShift environment. I also have
I am also seeing this; let me know if I can provide any configuration details:
I have the same issue. The cluster is running the following kustomization:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: trivy-system
resources:
  - trivy-operator.yml
  - https://raw.githubusercontent.com/aquasecurity/trivy-operator/v0.21.1/deploy/static/trivy-operator.yaml
patches:
  - patch: |-
      - op: replace
        path: /data/OPERATOR_METRICS_EXPOSED_SECRET_INFO_ENABLED
        value: "false"
      - op: replace
        path: /data/OPERATOR_METRICS_CONFIG_AUDIT_INFO_ENABLED
        value: "false"
      - op: replace
        path: /data/OPERATOR_METRICS_RBAC_ASSESSMENT_INFO_ENABLED
        value: "false"
      - op: replace
        path: /data/OPERATOR_METRICS_INFRA_ASSESSMENT_INFO_ENABLED
        value: "false"
      - op: replace
        path: /data/OPERATOR_METRICS_IMAGE_INFO_ENABLED
        value: "false"
      - op: replace
        path: /data/OPERATOR_METRICS_CLUSTER_COMPLIANCE_INFO_ENABLED
        value: "false"
      - op: replace
        path: /data/OPERATOR_CONCURRENT_SCAN_JOBS_LIMIT
        value: "3"
    target:
      kind: ConfigMap
      name: trivy-operator-config
  - patch: |-
      - op: replace
        path: /data/trivy.command
        value: "rootfs"
    target:
      kind: ConfigMap
      name: trivy-operator-trivy-config
  - patch: |-
      - op: replace
        path: /data/scanJob.podTemplateContainerSecurityContext
        value: "{\"allowPrivilegeEscalation\":false,\"capabilities\":{\"drop\":[\"ALL\"]},\"privileged\":false,\"readOnlyRootFilesystem\":true,\"runAsUser\":0}"
    target:
      kind: ConfigMap
      name: trivy-operator
```
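For context, a kustomization like the one above would typically be applied and verified with commands along these lines (a minimal sketch; the trivy-system namespace and ConfigMap name come from the config above, everything else is standard kubectl):

```sh
# Run from the directory containing this kustomization.yaml
kubectl apply -k .

# Confirm the patched trivy.command value landed in the operator's ConfigMap
kubectl get configmap trivy-operator-trivy-config -n trivy-system -o yaml
```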
I did a hard-restart of the cluster (rebooted all nodes, deleted & re-created all pods) and it seems to have fixed the issue for me.
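For anyone wanting to try a lighter version of that workaround, a rough sketch (the deployment and namespace names are assumptions based on the manifests above, and the underlying cause is still unknown) could be:

```sh
# Restart the operator so it re-evaluates its scan jobs
kubectl rollout restart deployment trivy-operator -n trivy-system

# Delete stuck scan jobs; the operator should recreate them on its next reconcile
kubectl delete jobs --all -n trivy-system
```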
What steps did you take and what happened:
Hi,
We saw there is an issue when we configure trivy.command = filesystem or trivy.command = rootfs: sometimes a scan job ends up with status ErrImageNeverPull.
Here is the log of the scan job:
And this is the message when we describe the scan pod:
Any suggestion to resolve this issue would be very much appreciated!
Thanks!
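As a sketch of how to collect that information (the job and pod names below are placeholders; the actual names are generated by the operator):

```sh
# List the scan jobs the operator has created
kubectl get jobs -n trivy-system

# Log of a scan job, if its container started (placeholder name)
kubectl logs job/<scan-job-name> -n trivy-system

# Pod events, where the ErrImageNeverPull status shows up
kubectl describe pod <scan-job-pod-name> -n trivy-system
```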
Environment:
- Trivy-Operator version (trivy-operator version): 0.18.3
- Kubernetes version (kubectl version): 1.25