This chart deploys the Anchore Engine docker container image analysis system. Anchore Engine requires a PostgreSQL database (>=9.6), which may be handled by the chart or supplied externally, and runs in a service-based architecture utilizing the following Anchore Engine services: External API, SimpleQueue, Catalog, Policy Engine, and Analyzer.
This chart can also be used to install the following Anchore Enterprise services: GUI, RBAC, Reporting, Notifications & On-premises Feeds. Enterprise services require a valid Anchore Enterprise License as well as credentials with access to the private DockerHub repository hosting the images. These are not enabled by default.
Each of these services can be scaled and configured independently.
See Anchore Engine for more project details.
The chart is split into global and service-specific configurations for the OSS Anchore Engine, as well as global and service-specific configurations for the Enterprise components.
- The anchoreGlobal section is for configuration values required by all Anchore Engine components.
- The anchoreEnterpriseGlobal section is for configuration values required by all Anchore Engine Enterprise components.
- Service-specific configuration values allow customization for each individual service.
For a description of each component, view the official documentation at: Anchore Enterprise Service Overview
TL;DR - helm install stable/anchore-engine
Anchore Engine will take approximately 3 minutes to bootstrap. After the initial bootstrap period, Anchore Engine will begin a vulnerability feed sync. During this time, image analysis will show zero vulnerabilities until the sync is completed. This sync can take multiple hours depending on which feeds are enabled. The following anchore-cli command is available to poll the system and report back when the engine is bootstrapped and the vulnerability feeds are fully synced: anchore-cli system wait
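For example, the following sketch runs anchore-cli against the deployed engine via a port-forward; the service name (here assuming a release named anchore, following the <release>-anchore-engine-api naming convention) and credentials are assumptions to adjust for your deployment:

# Port-forward the engine API service; verify the service name with `kubectl get svc`
kubectl port-forward svc/anchore-anchore-engine-api 8228:8228 &

# anchore-cli reads its connection settings from these environment variables
ANCHORE_CLI_URL=http://localhost:8228/v1 \
ANCHORE_CLI_USER=admin \
ANCHORE_CLI_PASS=<PASSWORD> \
anchore-cli system wait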
The recommended way to install the Anchore Engine Helm Chart is with a customized values file and a custom release name. It is highly recommended to set non-default passwords when deploying; otherwise, all passwords remain at the defaults specified in the chart. It is also recommended to utilize an external database rather than the included postgresql chart.
Create a new file named anchore_values.yaml and add all desired custom values (examples below); then run the following commands:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm install <release_name> -f anchore_values.yaml stable/anchore-engine
Note: Installs with a chart-managed PostgreSQL database. This is not a guaranteed production-ready config.
## anchore_values.yaml
postgresql:
  postgresPassword: <PASSWORD>
  persistence:
    size: 50Gi

anchoreGlobal:
  defaultAdminPassword: <PASSWORD>
  defaultAdminEmail: <EMAIL>
The following features are available to Anchore Enterprise customers. Please contact the Anchore team for more information about getting a license for the enterprise features. Anchore Enterprise Demo
* Role based access control
* LDAP integration
* Graphical user interface
* Customizable UI dashboards
* On-premises feeds service
* Proprietary vulnerability data feed (vulnDB, MSRC)
* Anchore reporting API
* Notifications - Slack, GitHub, Jira, etc.
* Microsoft image vulnerability scanning
Enterprise services require an Anchore Enterprise license, as well as credentials with permission to the private Docker repositories that contain the enterprise images.
To use this Helm chart with the enterprise services enabled, perform the following steps:
- Create a kubernetes secret containing your license file:

  kubectl create secret generic anchore-enterprise-license --from-file=license.yaml=<PATH/TO/LICENSE.YAML>

- Create a kubernetes secret containing DockerHub credentials with access to the private Anchore Enterprise repositories:

  kubectl create secret docker-registry anchore-enterprise-pullcreds --docker-server=docker.io --docker-username=<DOCKERHUB_USER> --docker-password=<DOCKERHUB_PASSWORD> --docker-email=<EMAIL_ADDRESS>

- (demo) Install the Helm chart using default values:

  helm repo add stable https://kubernetes-charts.storage.googleapis.com
  helm install <release_name> --set anchoreEnterpriseGlobal.enabled=true stable/anchore-engine

- (production) Install the Helm chart using a custom anchore_values.yaml file (see examples below):

  helm repo add stable https://kubernetes-charts.storage.googleapis.com
  helm install <release_name> -f anchore_values.yaml stable/anchore-engine
Note: Installs with chart-managed PostgreSQL & Redis databases. This is not a guaranteed production-ready config.
## anchore_values.yaml
postgresql:
  postgresPassword: <PASSWORD>
  persistence:
    size: 50Gi

anchoreGlobal:
  defaultAdminPassword: <PASSWORD>
  defaultAdminEmail: <EMAIL>
  enableMetrics: True

anchoreEnterpriseGlobal:
  enabled: True

anchore-feeds-db:
  postgresPassword: <PASSWORD>
  persistence:
    size: 20Gi

anchore-ui-redis:
  password: <PASSWORD>
As of chart version 1.3.1, deployments to OpenShift are fully supported. Due to permission constraints on OpenShift, the official RHEL PostgreSQL image must be used, which requires custom environment variables to be configured for compatibility with this chart.
Note: Installs with a chart-managed PostgreSQL database. This is not a guaranteed production-ready config.
## anchore_values.yaml
postgresql:
  image: registry.access.redhat.com/rhscl/postgresql-96-rhel7
  imageTag: latest
  extraEnv:
    - name: POSTGRESQL_USER
      value: anchoreengine
    - name: POSTGRESQL_PASSWORD
      value: anchore-postgres,123
    - name: POSTGRESQL_DATABASE
      value: anchore
    - name: PGUSER
      value: postgres
    - name: LD_LIBRARY_PATH
      value: /opt/rh/rh-postgresql96/root/usr/lib64
    - name: PATH
      value: /opt/rh/rh-postgresql96/root/usr/bin:/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  postgresPassword: <PASSWORD>
  persistence:
    size: 50Gi

anchoreGlobal:
  defaultAdminPassword: <PASSWORD>
  defaultAdminEmail: <EMAIL>
  openShiftDeployment: True
To perform an Enterprise deployment on OpenShift, use the following anchore_values.yaml configuration:
Note: Installs with a chart-managed PostgreSQL database. This is not a guaranteed production-ready config.
## anchore_values.yaml
postgresql:
  image: registry.access.redhat.com/rhscl/postgresql-96-rhel7
  imageTag: latest
  extraEnv:
    - name: POSTGRESQL_USER
      value: anchoreengine
    - name: POSTGRESQL_PASSWORD
      value: anchore-postgres,123
    - name: POSTGRESQL_DATABASE
      value: anchore
    - name: PGUSER
      value: postgres
    - name: LD_LIBRARY_PATH
      value: /opt/rh/rh-postgresql96/root/usr/lib64
    - name: PATH
      value: /opt/rh/rh-postgresql96/root/usr/bin:/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  postgresPassword: <PASSWORD>
  persistence:
    size: 20Gi

anchoreGlobal:
  defaultAdminPassword: <PASSWORD>
  defaultAdminEmail: <EMAIL>
  enableMetrics: True
  openShiftDeployment: True

anchoreEnterpriseGlobal:
  enabled: True

anchore-feeds-db:
  image: registry.access.redhat.com/rhscl/postgresql-96-rhel7
  imageTag: latest
  extraEnv:
    - name: POSTGRESQL_USER
      value: anchoreengine
    - name: POSTGRESQL_PASSWORD
      value: anchore-postgres,123
    - name: POSTGRESQL_DATABASE
      value: anchore
    - name: PGUSER
      value: postgres
    - name: LD_LIBRARY_PATH
      value: /opt/rh/rh-postgresql96/root/usr/lib64
    - name: PATH
      value: /opt/rh/rh-postgresql96/root/usr/bin:/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
  postgresPassword: <PASSWORD>
  persistence:
    size: 50Gi

anchore-ui-redis:
  password: <PASSWORD>
See the anchore-engine CHANGELOG for updates to anchore engine.
A Helm post-upgrade hook job has been added starting with chart version 1.6.0 - this job will shut down all previously running Anchore services and perform the Anchore DB upgrade process using a kubernetes job. The upgrade will only be considered successful when this job completes successfully. Performing an upgrade with chart v1.6.0 and later will cause the Helm client to block until the upgrade job completes and the new Anchore service pods are started. To view progress of the upgrade process, tail the logs of the upgrade jobs anchore-engine-upgrade and anchore-enterprise-upgrade. These job resources will be removed upon a successful helm upgrade.
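For example (a sketch - the exact job names may be prefixed with your release name, so verify with kubectl get jobs):

kubectl get jobs
kubectl logs -f job/<release_name>-anchore-engine-upgrade
kubectl logs -f job/<release_name>-anchore-enterprise-upgrade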
Changes with this version include:
- Anchore database upgrades will now be handled using a helm post-upgrade hook job
- Anchore Engine image updated to v0.7.1
- Anchore Enterprise updated to v2.3.0 - see CHANGELOG
- Enterprise deployments now use the anchore/enterprise image for all components
- Added GitHub advisory feeds
- Added NuGet .NET feeds to Enterprise feed service
- Updated resources to provide better minimum requirements baseline (these are still not production ready)
Changes to the Helm Chart include:
- Anchore Engine image updated to v0.7.0
- Enterprise deployments now use a different image for core anchore-engine services - .Values.anchoreEnterpriseGlobal.engineImage
- Default feed sync timeout increased to 180s
- Added an optional configuration for including an imagePullSecret on all anchore-engine images - .Values.anchoreGlobal.imagePullSecretName (example below)
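For example, to reference the DockerHub pull secret created in the enterprise installation steps above:

anchoreGlobal:
  imagePullSecretName: anchore-enterprise-pullcreds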
The following features were added with this chart version:
- Enterprise notifications service
- Numerous QOL improvements to the Enterprise UI service
The following features were added with this chart version:
- Allow custom CA certificates for TLS on all system dependencies (postgresql, ldap, registries)
- Customization of the analyzer configuration
- Improved authentication methods, allowing SAML/token based auth
- Enterprise UI reporting improvements
- Enterprise SSO integration
- Enterprise vulnerability data enhancement using VulnDB
Internal Service SSL configuration has been changed to support a global certificate storage secret. When upgrading to v1.3.0 of the chart, make sure the values file is updated appropriately.
## Chart v1.3.0 and later
anchoreGlobal:
  certStoreSecretName: anchore-certs
  internalServicesSsl:
    enabled: true
    verifyCerts: true
    certSecretKeyName: anchore.example.com.key
    certSecretCertName: anchore.example.com.crt

## Chart v1.2.x and earlier (previous configuration)
anchoreGlobal:
  internalServicesSslEnabled: true
  internalServicesSsl:
    verifyCerts: true
    certSecret: anchore-certs
    certDir: /home/anchore/certs
    certSecretKeyName: anchore.example.com.key
    certSecretCertName: anchore.example.com.crt
The following features were added with this chart version:
- Rootless UBI 7 base image
- Analyzer image layer caching
- Enterprise UI dashboards
- Enterprise LDAP integration
- Enterprise Reporting API
Scratch volume configs for the analyzer component & the enterprise-feeds component have been moved to the anchoreGlobal section. Update your values.yaml file to reflect this change, as shown below.
## Previous configuration
anchoreAnalyzer:
  scratchVolume:
    mountPath: /analysis_scratch
    details:
      # Specify volume configuration here
      emptyDir: {}

anchoreEnterpriseFeeds:
  scratchVolume:
    mountPath: /analysis_scratch
    details:
      # Specify volume configuration here
      emptyDir: {}

## New configuration
anchoreGlobal:
  scratchVolume:
    mountPath: /analysis_scratch
    details:
      # Specify volume configuration here
      emptyDir: {}
The Redis dependency chart major version has been updated to v6.1.3 - see the redis chart README for upgrade instructions.
The ingress configuration has been consolidated to a single global section. This should make it easier to manage the ingress resource. Before performing an upgrade, ensure you update your custom values file to reflect this change.
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: gce
  apiPath: /v1/*
  uiPath: /*
  apiHosts:
    - anchore-api.example.com
  uiHosts:
    - anchore-ui.example.com
The image map has been removed in all configuration sections in favor of individual keys. This should make configuration for tools like skaffold simpler. If using a custom values file, replace your image.repository, image.tag, & image.pullPolicy values with image & imagePullPolicy.
anchoreGlobal:
  image: docker.io/anchore/anchore-engine:v0.3.2
  imagePullPolicy: IfNotPresent

anchoreEnterpriseGlobal:
  image: docker.io/anchore/enterprise:v0.3.3
  imagePullPolicy: IfNotPresent

anchoreEnterpriseUI:
  image: docker.io/anchore/enterprise-ui:v0.3.1
  imagePullPolicy: IfNotPresent
Ingress resources have been changed to work natively with NGINX ingress controllers. If you're using a different ingress controller, update your values.yaml file accordingly. See the Using Ingress configuration section for examples of NGINX & GCE ingress controller configurations.
Service configs have been moved from the anchoreGlobal section to individual component sections in the values.yaml file. If you're upgrading from a previous install and are using custom ports or serviceTypes, be sure to update your values.yaml file accordingly.
anchoreApi:
  service:
    type: ClusterIP
    port: 8228
Version 0.9.0 of the anchore-engine helm chart includes major changes to the architecture and values.yaml file, and introduces Anchore Enterprise components. Due to these changes, upgrades should be handled with caution. Any custom values.yaml files will also need to be adjusted to match the new structure. Version upgrades have only been validated when upgrading from 0.2.6 -> 0.9.0.
helm upgrade <release_name> stable/anchore-engine
When upgrading the chart from version 0.2.6 to version 0.9.0, it will take approximately 5 minutes for anchore-engine to upgrade the database. To ensure that the upgrade has completed, run the anchore-cli system status command and verify the engine & db versions match the output below.
Engine DB Version: 0.0.8
Engine Code Version: 0.3.0
All configurations should be appended to your custom anchore_values.yaml file and utilized when installing the chart. While the configuration options of Anchore Engine are extensive, the options provided by the chart are listed in the following sections.
This configuration allows SSL termination using your chosen ingress controller.
ingress:
  enabled: true
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  apiPath: /v1/*
  uiPath: /*
  apiHosts:
    - anchore-api.example.com
  uiHosts:
    - anchore-ui.example.com

anchoreApi:
  service:
    type: NodePort

anchoreEnterpriseUi:
  service:
    type: NodePort
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: gce
  apiPath: /v1/*
  uiPath: /*
  apiHosts:
    - anchore-api.example.com
  uiHosts:
    - anchore-ui.example.com

anchoreApi:
  service:
    type: NodePort

anchoreEnterpriseUi:
  service:
    type: NodePort
anchoreApi:
  service:
    type: LoadBalancer
This can be used to override the default secrets.yaml provided by the chart:
anchoreGlobal:
  existingSecret: "foo-bar"
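A minimal sketch of creating such a secret follows; the key names shown (ANCHORE_ADMIN_PASSWORD, ANCHORE_DB_PASSWORD) are assumptions - match them to the keys defined in the chart's default secrets.yaml for your chart version:

# Key names below are hypothetical - verify against the chart's default secrets.yaml
kubectl create secret generic foo-bar \
  --from-literal=ANCHORE_ADMIN_PASSWORD=<PASSWORD> \
  --from-literal=ANCHORE_DB_PASSWORD=<PASSWORD>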
Note: it is recommended to use an external PostgreSQL instance for production installs.
postgresql:
  postgresPassword: <PASSWORD>
  postgresUser: <USER>
  postgresDatabase: <DATABASE>
  enabled: false
  externalEndpoint: <HOSTNAME:5432>

anchoreGlobal:
  dbConfig:
    ssl: true
## anchore_values.yaml
postgresql:
  enabled: false
  postgresPassword: <CLOUDSQL-PASSWORD>
  postgresUser: <CLOUDSQL-USER>
  postgresDatabase: <CLOUDSQL-DATABASE>

cloudsql:
  enabled: true
  instance: "project:zone:cloudsqlinstancename"
  # Optional existing service account secret to use.
  useExistingServiceAcc: true
  serviceAccSecretName: my_service_acc
  serviceAccJsonName: for_cloudsql.json
  image:
    repository: gcr.io/cloudsql-docker/gce-proxy
    tag: 1.12
    pullPolicy: IfNotPresent
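When useExistingServiceAcc is true, the referenced secret must already exist in the release namespace. A sketch using the names from the example above (the key file path is a placeholder):

# Create the service account secret referenced by serviceAccSecretName / serviceAccJsonName
kubectl create secret generic my_service_acc --from-file=for_cloudsql.json=<PATH/TO/SERVICE_ACCOUNT_KEY.JSON>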
Note: it is recommended to use an external archive driver for production installs.
The archive subsystem of Anchore Engine stores large JSON documents and can consume a significant amount of storage if you analyze many images. A general rule for storage provisioning is 10MB per analyzed image; for example, 10,000 analyzed images would require roughly 100GB. The archive drivers now support backends other than PostgreSQL, so you can leverage external, scalable storage systems and keep PostgreSQL storage usage much lower.
The archive system supports compression to reduce object size and storage consumed, in exchange for slightly slower performance and more CPU usage. There are two config values: one to toggle compression on/off (default is True), and one to set the minimum object size in kilobytes for compression to be applied (default is 100, to avoid compressing objects too small to benefit):
anchoreCatalog:
  archive:
    compression:
      enabled: True
      min_size_kbytes: 100
The supported archive storage drivers are:
- S3 - Any AWS S3-API compatible system (e.g. minio, scality, etc.)
- OpenStack Swift
- Local FS - A local filesystem on the core pod. Does not handle sharding or replication, so generally only for testing.
- DB - the default PostgreSQL backend
anchoreCatalog:
  archive:
    storage_driver:
      name: 's3'
      config:
        access_key: 'MY_ACCESS_KEY'
        secret_key: 'MY_SECRET_KEY'
        #iamauto: True
        url: 'https://S3-end-point.example.com'
        region: null
        bucket: 'anchorearchive'
        create_bucket: True
    compression:
      ... # Compression config here
The Swift configuration is essentially a pass-through to the underlying python-swiftclient, so it can take quite a few different options depending on your Swift deployment and config. The best way to configure the Swift driver is by using a custom values.yaml file.
The Swift driver supports three authentication methods:
- Keystone V3
- Keystone V2
- Legacy (username / password)
## Keystone V3 authentication
anchoreCatalog:
  archive:
    storage_driver:
      name: swift
      config:
        auth_version: '3'
        os_username: 'myusername'
        os_password: 'mypassword'
        os_project_name: myproject
        os_project_domain_name: example.com
        os_auth_url: 'foo.example.com:8000/auth/etc'
        container: 'anchorearchive'
        # Optionally
        create_container: True
    compression:
      ... # Compression config here
## Keystone V2 authentication
anchoreCatalog:
  archive:
    storage_driver:
      name: swift
      config:
        auth_version: '2'
        os_username: 'myusername'
        os_password: 'mypassword'
        os_tenant_name: 'mytenant'
        os_auth_url: 'foo.example.com:8000/auth/etc'
        container: 'anchorearchive'
        # Optionally
        create_container: True
    compression:
      ... # Compression config here
## Legacy (username / password) authentication
anchoreCatalog:
  archive:
    storage_driver:
      name: swift
      config:
        user: 'user:password'
        auth: 'http://swift.example.com:8080/auth/v1.0'
        key: 'anchore'
        container: 'anchorearchive'
        # Optionally
        create_container: True
    compression:
      ... # Compression config here
This is the default archive driver and requires no additional configuration.
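For reference, a sketch of the equivalent explicit configuration (no driver-specific config options are required):

anchoreCatalog:
  archive:
    storage_driver:
      name: db
      config: {}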
Anchore Engine supports exporting prometheus metrics from each container. To enable metrics:
anchoreGlobal:
  enableMetrics: True
When enabled, each service provides the metrics over its existing service port, so your Prometheus deployment will need to know about each pod, and the ports each service provides, in order to scrape the metrics.
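As a sketch, a Prometheus scrape job using Kubernetes pod service discovery might look like the following; the pod label and value used in the keep rule are assumptions, so adjust them to match the labels on your anchore-engine pods:

scrape_configs:
  - job_name: 'anchore'
    metrics_path: /metrics
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only anchore-engine pods; check `kubectl get pods --show-labels`
      # for the actual label names and values in your deployment
      - source_labels: [__meta_kubernetes_pod_label_app]
        regex: <release_name>-anchore-engine
        action: keep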
A secret needs to be created in the same namespace as the anchore-engine chart installation. This secret should contain all custom certs, including CA certs & any certs used for internal TLS communication. This secret will be mounted to all anchore-engine pods at /home/anchore/certs to be utilized by the system.
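For example, a secret matching the certStoreSecretName and cert/key file names used in the SSL configuration example above could be created as follows (the CA file name is an arbitrary placeholder):

kubectl create secret generic anchore-certs \
  --from-file=anchore.example.com.key=<PATH/TO/KEY> \
  --from-file=anchore.example.com.crt=<PATH/TO/CERT> \
  --from-file=custom-ca.pem=<PATH/TO/CA_CERT>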
Anchore Engine v0.2.3 introduced a new events subsystem that exposes system-wide events via both a REST API and webhooks. The webhooks support filtering to ensure only certain event classes result in webhook calls, to help limit the volume of calls if you desire. Events, and all webhooks, are emitted from the core components, so configuration is done in the anchoreCatalog section.
To configure the events:
anchoreCatalog:
  events:
    notification:
      enabled: true
      level: error
As of chart version 0.9.0, all services can be scaled out by increasing the replica counts.
To set a specific number of service containers:
anchoreAnalyzer:
  replicaCount: 5

anchorePolicyEngine:
  replicaCount: 3
To update the number in a running configuration:
helm upgrade --set anchoreAnalyzer.replicaCount=2 <releasename> stable/anchore-engine -f anchore_values.yaml