Verdaccio is a lightweight private NPM proxy registry.
```bash
helm repo add verdaccio-gke-charts https://xlts-dev.github.io/verdaccio-gke-charts
helm repo update
helm install npm verdaccio-gke-charts/verdaccio-gke-charts
```
This chart bootstraps a Verdaccio deployment on a Google Kubernetes Engine (GKE) cluster using the Helm package manager.
- Create a Google Cloud Project
- Install the gcloud SDK and CLI, or use the Cloud Shell
- If using the CLI outside of Cloud Shell, run:
  ```bash
  gcloud config set project your-project
  ```
- Create a GKE cluster (see the example after this list)
- Pick the cluster to work with using a command like:
  ```bash
  gcloud container clusters get-credentials verdaccio-autopilot-cluster --region=us-central1
  ```
- Create a namespace (recommended):
  ```bash
  kubectl create namespace registry
  ```
- If not in the Cloud Shell, install Helm v3+
- Add the chart repository:
  ```bash
  helm repo add verdaccio-gke-charts https://xlts-dev.github.io/verdaccio-gke-charts
  ```
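If you do not have a cluster yet, an Autopilot cluster matching the `get-credentials` example above can be created with a single command; the cluster name and region below are only illustrative.

```bash
# Create a GKE Autopilot cluster (name and region are examples)
gcloud container clusters create-auto verdaccio-autopilot-cluster --region=us-central1
```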
In this example we use `npm` as the release name:

```bash
# Helm v3+
helm install npm --namespace registry verdaccio-gke-charts/verdaccio-gke-charts

# Install a specific image tag
helm install npm --set image.tag=4.6.2 verdaccio-gke-charts/verdaccio-gke-charts

# Upgrade an existing release
helm upgrade npm --namespace registry verdaccio-gke-charts/verdaccio-gke-charts
```
The command deploys Verdaccio on the GKE cluster in the default configuration. The configuration section lists the parameters that can be configured during installation.
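To verify the release, check the pods and port-forward the service to try the registry locally. The service name below assumes the chart's default naming of `<release>-verdaccio-gke-charts`; confirm it with `kubectl get svc --namespace registry`.

```bash
# Check that the Verdaccio pod is running
kubectl get pods --namespace registry

# Forward the service port (80 by default) to localhost and point npm at it
kubectl port-forward --namespace registry svc/npm-verdaccio-gke-charts 4873:80
npm config set registry http://localhost:4873/
```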
Tip: List all releases using `helm list`.
To uninstall/delete the `npm` deployment:

```bash
helm uninstall npm --namespace registry
```
The command removes all the GKE and GCE components associated with the chart and deletes the release.
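After uninstalling, you can confirm that nothing from the release is left behind in the namespace used above:

```bash
# Both commands should show nothing from the npm release once it has been removed
helm list --namespace registry
kubectl get all --namespace registry
```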
The following table lists the configurable parameters of the Verdaccio chart, and their default values.
| Parameter | Description | Default |
| --- | --- | --- |
| `affinity` | Affinity for pod assignment | `{}` |
| `existingConfigMap` | Name of custom ConfigMap to use | `false` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `image.pullSecrets` | Image pull secrets | `[]` |
| `image.repository` | Verdaccio container image repository | `verdaccio/verdaccio` |
| `image.tag` | Verdaccio container image tag | `5.14.0` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `tolerations` | List of node taints to tolerate | `[]` |
| `persistence.accessMode` | PVC access mode for Verdaccio volume | `ReadWriteOnce` |
| `persistence.enabled` | Enable persistence using PVC | `true` |
| `persistence.existingClaim` | Use existing PVC | `nil` |
| `persistence.mounts` | Additional mounts | `nil` |
| `persistence.size` | PVC storage request for Verdaccio volume | `8Gi` |
| `persistence.storageClass` | PVC storage class for Verdaccio volume | `nil` |
| `persistence.selector` | Selector to match an existing PersistentVolume | `{}` (evaluated as a template) |
| `persistence.volumes` | Additional volumes | `nil` |
| `podLabels` | Additional pod labels | `{}` (evaluated as a template) |
| `podAnnotations` | Annotations to add to each pod | `{}` |
| `priorityClass.enabled` | Enable specifying pod `priorityClassName` | `false` |
| `priorityClass.name` | `priorityClassName` to be specified in pod spec | `""` |
| `replicaCount` | Desired number of pods; has no effect when the autoscaler is enabled | `1` |
| `resources` | CPU/memory resource requests/limits | `{}` |
| `service.annotations` | Annotations to add to the service | `none` |
| `service.clusterIP` | IP address to assign to the service | `""` |
| `service.externalIPs` | Service external IP addresses | `[]` |
| `service.loadBalancerIP` | IP address to assign to the load balancer (if supported) | `""` |
| `service.loadBalancerSourceRanges` | List of IP CIDRs allowed access to the load balancer (if supported) | `[]` |
| `service.port` | Service port to expose | `80` |
| `service.targetPort` | Container port to target | `4873` |
| `service.type` | Type of service to create | `ClusterIP` |
| `serviceAccount.create` | Create service account | `false` |
| `serviceAccount.name` | Service account name | `none` |
| `extraEnvVars` | Define environment variables to be passed to the container | `{}` |
| `extraInitContainers` | Define additional initContainers to be added to the deployment | `[]` |
| `securityContext` | Define container security context | `{runAsUser=10001}` |
| `podSecurityContext` | Define pod security context | `{fsGroup=101}` |
| `nameOverride` | Set resource name override | `""` |
| `fullnameOverride` | Set resource fullname override | `""` |
| `ingress.enabled` | Enable/disable Ingress | `false` |
| `ingress.className` | Ingress class name (Kubernetes >= 1.18 required) | `""` |
| `ingress.annotations` | Ingress annotations | `{}` |
| `ingress.hosts` | List of Ingress hosts | `[]` |
| `ingress.paths` | List of Ingress paths | `["/"]` |
| `ingress.extraPaths` | List of extra Ingress paths | `[]` |
| `ingress.defaultBackend` | An IngressBackend that will handle requests that don't match any Ingress rule | `nil` |
| `readinessProbe.initialDelaySeconds` | How long after startup before the readiness probe is initiated | `5` |
| `readinessProbe.timeoutSeconds` | Number of seconds after which the probe times out | `1` |
| `readinessProbe.periodSeconds` | How often to perform the probe after startup | `10` |
| `readinessProbe.failureThreshold` | Minimum failures for the probe to be considered failed | `3` |
| `readinessProbe.successThreshold` | Minimum successes for the probe to be considered successful | `1` |
| `livenessProbe.initialDelaySeconds` | How long after startup before the liveness probe is initiated | `5` |
| `livenessProbe.timeoutSeconds` | Number of seconds after which the probe times out | `1` |
| `livenessProbe.periodSeconds` | How often to perform the probe after startup | `10` |
| `livenessProbe.failureThreshold` | Minimum failures for the probe to be considered failed | `3` |
| `livenessProbe.successThreshold` | Minimum successes for the probe to be considered successful | `1` |
| `autoscaler.enabled` | Whether to enable the HorizontalPodAutoscaler | `false` |
| `autoscaler.minReplicas` | Lower limit for the number of replicas when scaling down; overrides `replicaCount` when the autoscaler is enabled | `1` |
| `autoscaler.maxReplicas` | Upper limit for the number of replicas when scaling up | `1` |
| `autoscaler.metrics` | List of MetricSpec objects to trigger scaling | `[]` |
| `topologySpreadConstraints` | List of TopologySpreadConstraint objects to apply to the pod(s) | `[]` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example:

```bash
helm install my-release --set service.type=LoadBalancer verdaccio-gke-charts/verdaccio-gke-charts
```

The above command sets the service type to `LoadBalancer`.
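With type `LoadBalancer`, GKE provisions an external IP for the service, which can take a minute or two to appear. The service name below assumes the chart's default naming for a release called `my-release`; check the actual name with `kubectl get svc`.

```bash
# Watch the service until EXTERNAL-IP is populated (service name is an assumption)
kubectl get svc my-release-verdaccio-gke-charts --watch
```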
Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example:

```bash
helm install my-release -f values.yaml verdaccio-gke-charts/verdaccio-gke-charts
```

Tip: You can use the default `values.yaml` as a starting point.
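A minimal `values.yaml` might override only a handful of the parameters from the table above; everything not listed keeps its chart default.

```yaml
# values.yaml (sketch): override only what differs from the chart defaults
image:
  tag: 5.14.0
service:
  type: LoadBalancer
persistence:
  size: 20Gi
```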
When creating a new chart with this chart as a dependency, `existingConfigMap` can be used to override the default `config.yaml` provided. It also allows for providing additional configuration files that will be copied into `/verdaccio/conf`. In the parent chart's `values.yaml`, set the value to `true` and provide the file `templates/config.yaml` for your use case.
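A minimal sketch of the two pieces involved, assuming this chart is listed as a dependency under its chart name; the ConfigMap metadata and the Verdaccio settings shown are illustrative only and should be adapted to the templates the chart actually consumes.

```yaml
# Parent chart values.yaml (sketch): enable the custom ConfigMap for the dependency
verdaccio-gke-charts:
  existingConfigMap: true
```

```yaml
# Parent chart templates/config.yaml (sketch): a ConfigMap carrying the Verdaccio config.
# The metadata.name shown here is an assumption; adjust it to your setup.
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-verdaccio-config
data:
  config.yaml: |
    storage: /verdaccio/storage/data
    uplinks:
      npmjs:
        url: https://registry.npmjs.org/
    packages:
      '**':
        access: $all
        publish: $authenticated
        proxy: npmjs
```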
The Verdaccio image stores its persistent data under the `/verdaccio/storage` path of the container. By default, a dynamically managed PersistentVolumeClaim is used to keep the data across deployments. This is known to work on GCE, AWS, and minikube. Alternatively, a previously configured PersistentVolumeClaim can be used. It is possible to mount several volumes using the `persistence.volumes` and `persistence.mounts` parameters, as shown below.
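A minimal sketch of what that could look like in `values.yaml`, assuming the entries are passed through as standard Kubernetes `volumes` and `volumeMounts` (the names and paths below are placeholders):

```yaml
persistence:
  enabled: true
  volumes:
    # Placeholder: an extra volume made available to the Verdaccio pod
    - name: extra-plugins
      emptyDir: {}
  mounts:
    # Placeholder: where the extra volume is mounted inside the container
    - name: extra-plugins
      mountPath: /verdaccio/plugins
```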
- Create the PersistentVolume
- Create the PersistentVolumeClaim (a minimal claim manifest is sketched after these steps)
- Install the chart:

```bash
helm install npm \
  --set persistence.existingClaim=PVC_NAME \
  verdaccio-gke-charts/verdaccio-gke-charts
```
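For step 2, a minimal claim might look like the following; the name, namespace, and size are placeholders, and `metadata.name` is what you would pass as `PVC_NAME` above.

```yaml
# pvc.yaml (sketch): a pre-created claim referenced via persistence.existingClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: verdaccio-storage   # placeholder; pass this as PVC_NAME
  namespace: registry
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
```

Apply it with `kubectl apply -f pvc.yaml` before installing the chart.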