---
order: 1
title: FAQ
---
You will have to use the module-level `include` directive to specify which files belong to each module. You will also have to provide the path to the Dockerfile with the `dockerfile` directive.

If the module only has a Dockerfile but no other files, say because it's a 3rd-party image, you should set `include: []`.
See this section of our docs for more.
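For illustration, here's a sketch of two such module configs (names and paths are hypothetical): one built from a local Dockerfile with an explicit `include` list, and one wrapping a 3rd-party image with `include: []`:

```yaml
kind: Module
type: container
name: api
# Only these files are part of the module's version and build context
include: [Dockerfile, src/**/*]
dockerfile: Dockerfile

---

kind: Module
type: container
name: redis
# 3rd-party image, no local files belong to this module
image: redis:6-alpine
include: []
```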
Yes.
You can use the `disabled` field to disable modules, services, tests, and tasks.
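As a sketch (the module name and condition are illustrative), `disabled` accepts template strings, so a module can be toggled per environment:

```yaml
kind: Module
type: container
name: local-db
# Skip this module entirely in the prod environment
disabled: "${environment.name == 'prod'}"
```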
Both, actually.
When building: If the `image` field is specified and Garden can't find a Dockerfile for the module, Garden will use that image when deploying the module. If there is a Dockerfile, Garden will build the image from it, regardless of whether or not the `image` field is specified.

When publishing: If the `image` field is specified and the module has a Dockerfile, Garden will build the image from the Dockerfile and publish it to the URL specified in the `image` field. If there's no Dockerfile, the `publish` command will fail.
We aim to change this behavior and make it more user-friendly in our next major release.
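To illustrate the publish case described above (the registry URL is hypothetical): with a Dockerfile present in the module directory, Garden builds from it and uses the `image` field as the publish target:

```yaml
kind: Module
type: container
name: api
# A Dockerfile exists next to this config, so Garden builds it,
# and `garden publish` pushes the result to this repository
image: registry.example.com/team/api
```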
When should I use the module-level `include`/`exclude` fields? How are they different from the project-level `module.include`/`module.exclude` fields? What about ignore files?
Read all about it in this section of our docs.
We recommend using the Terraform module for cloud services that are shared by your team.
You can also deploy `kubernetes` and `helm` modules to their own namespaces.
You can use the copy directive of the `build.dependencies[]` field for that. See e.g. this example project.

Alternatively, you can hoist your `garden.yml` file so that it is at the same level as, or a parent of, all relevant build context, and use the `include` field.
See this GitHub issue for more discussion on the two approaches.
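A sketch of the copy approach, assuming a module named `shared-lib` whose build output should land inside this module's build context (all names and paths are hypothetical):

```yaml
kind: Module
type: container
name: api
build:
  dependencies:
    - name: shared-lib
      copy:
        # Copy files from shared-lib's build directory into ./lib
        # in this module's build context before building
        - source: dist
          target: lib
```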
What do all those `v-<something>` versions mean, and why are they sometimes different between building and deploying?
These are the Garden versions that are computed for each node in the Stack Graph at runtime, based on source files and configuration for each module, service, task and test. See here for more information about how these work and how they're used.
You may notice that a build version (e.g. an image tag for a `container` module) is generally different from the version of a service defined in the same module. This is because the service version also factors in the runtime configuration for that service, which often differs between environments, but we don't want those changes to require a rebuild of the container image.
Use the `targetImage` field.
See this example project.
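For example, given a multi-stage Dockerfile with a stage named `builder` (the stage name is hypothetical):

```yaml
kind: Module
type: container
name: api
# Build only up to the named stage of a multi-stage Dockerfile
targetImage: builder
```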
No. Only modules can be build dependencies, and runtime outputs come from tasks, tests, and services.
Set the log level to `verbose` or higher. For example:

```sh
garden build --log-level verbose
```
Dockerfiles need to be at the same level as the `garden.yml` file for the respective module, or in a child directory.

You can always hoist the `garden.yml` file to the same level as the Dockerfile and use the `include` directive to tell Garden what other files belong to the module. For example, if you have the following directory structure:
```
.
├── api
├── dockerfiles
│   ├── api.Dockerfile
│   └── frontend.Dockerfile
└── frontend
```
you can put your `garden.yml` file at the root and define your modules like so:
```yaml
kind: Module
name: api
dockerfile: dockerfiles/api.Dockerfile
include: [api/**/*]

---

kind: Module
name: frontend
dockerfile: dockerfiles/frontend.Dockerfile
include: [frontend/**/*]
```
Note that you can put multiple Garden configuration files in the same directory, e.g. `project.garden.yml`, `api.garden.yml` and `frontend.garden.yml`.
If you need the Dockerfile outside of the module root because you want to share it with other modules, you should consider having a single base image instead and then let each module have its own Dockerfile that's built on the base image. See the base image example project for an example of this.
How do I include files/dirs (e.g. shared libraries) from outside the module root with the build context?
See this example project.
Use the module-level `extraFlags` field.
You can use the `dockerfile` field. For example:

```yaml
dockerfile: "${environment.name == 'prod' ? 'Dockerfile.prod' : 'Dockerfile.dev'}"
```
See also the base image example project for an example of this.
Please do not delete the `garden-system` namespace directly, because Kubernetes may fail to remove persistent volumes. Instead, use this command:

```sh
garden plugins kubernetes uninstall-garden-services --env <env-name>
```

It removes all cluster-wide Garden services.
How do I pull a base image (using the FROM directive) from a private registry in in-cluster build mode?
See this section of our docs.
See this section of our docs.
We've been pondering this, but there are a lot of variants to consider. The key issue is really that the notion of "first time" is kind of undefined as things stand.
So what we generally do is to make sure tasks are idempotent and exit early if they shouldn't run again. But that means the process still needs to be started, which is of course slower than not doing it at all.
It is, which is why we recommend that tasks are written to be idempotent. Tasks by nature don’t really have a status check, unlike services.
This is intentional, we don't re-run tasks on file watch events. We debated this behavior quite a bit and ultimately opted not to run task dependencies on every watch event.
The task result is likely cached. Garden won't run tasks with cached results unless `cacheResult: false` is set on the task definition.
You can also run it manually with:
```sh
garden run <task-name>
```
This will run the task even if the result is cached.
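For example, a task that should run on every request rather than use a cached result might be configured like this (the task name and command are illustrative):

```yaml
kind: Module
type: container
name: api
tasks:
  - name: db-migrate
    command: [npm, run, migrate]
    # Don't cache the result; re-run this task every time it's requested
    cacheResult: false
```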
Garden stores task results as ConfigMaps in the `<project-name>--metadata` namespace. You can delete them manually with this command:

```sh
kubectl delete -n <project-name>--metadata $(kubectl get configmap -n <project-name>--metadata -o name | grep task-result)
```
You can also run it manually with:
```sh
garden run <task-name>
```
This will run the task even if the result is cached.
See this section of our docs.
You'll need to use the `kubernetes` or `helm` module types for that. Here's the official Kubernetes guide for mounting secrets as files.
No, Kubernetes secrets can only be used at runtime, by referencing them in the `environment` field of `tasks`, `services` and `tests`. See the secrets section of our docs for more.

Also note that secrets as `buildArgs` are considered a bad practice and a security risk.
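As a sketch for a `container` module (the secret and key names are hypothetical), a Kubernetes secret can be referenced at runtime like so:

```yaml
services:
  - name: api
    env:
      DB_PASSWORD:
        # References the `password` key of the Kubernetes secret
        # `postgres-creds` in the project namespace
        secretRef:
          name: postgres-creds
          key: password
```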
No, secrets have to be in the same namespace as the project. This is how Kubernetes secrets are designed, see here for reference.
See this section of our docs.
How do I access files that are generated at runtime (e.g. migration files that are checked into version control)?
You can generate the files via a task, store them as artifacts, and copy them from the local artifacts directory. Here's an example of this.
You can also use the `persistentvolumeclaim` module type to store data and share it across modules. See this section of our docs for more.
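A sketch of the task/artifact approach (paths and names are hypothetical): the task writes files inside the container, and Garden copies them to the local artifacts directory:

```yaml
tasks:
  - name: generate-migrations
    command: [npm, run, generate-migrations]
    artifacts:
      # Copied from the running container to the local artifacts directory
      - source: /app/migrations/*
        target: migrations
```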
You can set annotations on ingresses under the `services[].ingresses[]` field.
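For example (the annotation shown is illustrative):

```yaml
services:
  - name: frontend
    ingresses:
      - path: /
        port: http
        # Passed through to the resulting Kubernetes Ingress resource
        annotations:
          nginx.ingress.kubernetes.io/proxy-body-size: "10m"
```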
Garden interfaces with your cluster via `kubectl` and by using the Kubernetes APIs directly, and should therefore work with any Kubernetes cluster that implements these. Garden is committed to supporting the latest six stable versions of Kubernetes.
No, you have to use the `kubernetes` module type for that.
We're exploring how we can release it incrementally. Please let us know if this is something you're interested in.
The `*.local.app.garden` domain resolves to 127.0.0.1 via our DNS provider for convenience. If you want to use a different hostname for local development, you'll have to add the corresponding entry to your hosts file.
No, it doesn't. See this question above for accessing files that are generated at runtime.
Garden is currently in use by many teams. We don’t have a set date or plan to label it as 1.0, but we don't expect to do it anytime soon. For comparison, very widely used tools like Terraform are still not at 1.0.
We have a team of people working on it full-time, and we make it a priority to address all non-trivial bugs. We’re also happy to help out and answer questions via our Discord community.
Garden is not currently designed to work in air-gapped environments. This would require a fair amount of workarounds, unfortunately.